


Decision Management Systems Platform Technologies Report

Version 7, Update 4, March 25, 2016

Organizations are adopting a new class of operational systems called Decision Management Systems to meet the demands of consumers, regulators and markets, because traditional systems are too inflexible, fail to learn and adapt and, crucially, cannot apply analytics to take advantage of Big Data. Decision Management Solutions conducts ongoing research into the increasingly robust technology platforms available for building this new class of system. Authored by James Taylor, CEO, this report covers Business Rules Management Systems, Predictive Analytic Workbenches and optimization technologies used alone or in combination to build custom Decision Management Systems, as well as in-database analytics and other analytic infrastructure that can be used to maximize the effectiveness of predictive analytics.

This is the seventh version of this report. A new section on the Analytics Capability Landscape has been added. First Looks are also posted to JTonEDM as they are completed. The full text of the report is available online and as a PDF, and both can be navigated using the table of contents. Each new version of the report will be made available here. Subscribe to our newsletter for report updates and more decision management news.

Table of Contents

Introduction

Organizations are adopting a new class of operational systems called Decision Management Systems to meet the demands of consumers, regulators and markets, because traditional systems are too inflexible, fail to learn and adapt and, crucially, cannot apply analytics to take advantage of Big Data. Decision Management Systems are agile, analytic and adaptive. They are agile, so they can be changed quickly to cope with new regulations or business conditions. They are analytic, putting an organization's data to work to improve the quality and effectiveness of decisions. They are adaptive, learning from what works and what does not work so as to continuously improve over time.

Decision Management Systems are built by focusing on repeatable, operational decisions that affect individual transactions or customers. Once these decisions are discovered and modeled, Decision Services are built that embody the organization's preferred decision-making in operational software components. The performance of these components, and the impact of that performance on overall organizational results, is tracked, analyzed and fed back to improve the effectiveness of decision-making. Decision Management Systems deliver a high return on investment: they improve risk management and the matching of price to risk; they reduce or eliminate fraud and waste; they increase revenue by making the most of every opportunity; and they improve the utilization of constrained resources across the organization.

Decision Management Systems are different from traditional enterprise applications and from business process or event-based systems. Established approaches and technologies play a role in the development of Decision Management Systems. Used alone, however, these technologies and approaches tend to deliver systems that are inflexible, static and opaque.
To fulfill the promise of agile, adaptive systems that fully exploit Big Data, organizations will need to extend their enterprise architecture to include capabilities from the proven technologies described in this report. Tested and established across many industries, the technologies suited to developing Decision Management Systems include Business Rules Management Systems, data mining or Predictive Analytic Workbenches and optimization suites, as well as new in-database analytic infrastructure and more. Organizations must select those that have the capabilities they need, that demonstrate decision management best practices, and that fit their architecture and use cases.

This report describes these product categories and identifies the key characteristics of these technologies. Best practices in their use, as well as key use cases, are identified and discussed. A comprehensive list of vendors in the market is provided, and an appendix gives more detailed background on Decision Management Systems.

If you are new to Decision Management Systems, the appendix on Decision Management Systems provides an overview of this class of systems. The section on use cases also illustrates some possible applications of Decision Management Systems, and the author's most recent book, Decision Management Systems: A Practical Guide to Using Business Rules and Predictive Analytics (IBM Press, 2012), has a more complete list. Those familiar with Decision Management Systems who are considering new technology choices as part of building their own should begin with Decision Management Systems Platform Capabilities and then move to the Vendors section to find candidate vendors to consider. The section on key characteristics as well as the section on selecting vendors should also be helpful. Whether familiar with Decision Management Systems or not, the section Best Practices in Decision Management Systems is worth reading before any new project.

This report is prompted by the growing interest of organizations in Decision Management Systems. It is always challenging to draw boundaries around such an exciting and growing area, but for practical purposes it must be done. For this report we are focused on platform technologies used to build custom Decision Management Systems, and our aim is to be comprehensive within that scope. Many vendors have developed powerful pre-configured Decision Management Systems focused on solving specific decision-making problems such as loan underwriting, claims handling or cross-channel marketing. For many organizations these solutions are ideal, but they are not the focus of this report. Similarly, there are vendors who build custom Decision Management Systems for their clients and who have developed powerful platforms for doing so. If such a platform is not for sale to those building their own solutions, it is out of scope for this report. In both of these scenarios the report's discussions of what kind of functionality is useful, of best practices and of the characteristics of suitable products may still be helpful when selecting vendors, but some interpretation will be required. For instance, when considering pre-configured Decision Management Systems, a discussion of business rule management capabilities is likely to be relevant while one about connecting to data sources may not be. If you know of other products you believe should be included, or have any other feedback, please let us know by emailing info@decisionmanagementsolutions.com. This is the seventh version of this report.
A new section on the Analytics Capability Landscape has been added. Vendors and products within the scope of the report will be added on an ongoing basis. First Looks are also posted to JTonEDM as they are completed. This report may be freely circulated, printed and reproduced in its entirety provided no changes are made. Please email info@decisionmanagementsolutions.com if you would like to publish an extract. Quotations from this report should be properly attributed and identified as 2015, Decision Management Solutions. While every care has been taken to validate the information in this report, Decision Management Solutions accepts no liability for its content or for the consequences of any actions taken on the basis of the information provided.

Decision Management Systems Platform Capabilities

Four aspects of building a Decision Management System drive organizations to adopt new, specialized decision management technologies:

- Managing decision logic for transparency and agility
- Embedding predictive analytics for analytic decision-making
- Optimizing results given real-world trade-offs, and simulating outcomes
- Monitoring and improving decision-making over time

In this section we introduce these four capabilities and put them in a broader context. Subsequent sections describe the capabilities in more detail.

Managing Decision Logic

Like all information systems, Decision Management Systems require the definition of the logic to be applied during operations. In Decision Management Systems this logic is primarily decision-making logic: how a particular decision should be made given the system's understanding of the current situation. Decision Management Systems must, however, be more agile than traditional information systems, so this logic cannot be managed as code. Using code to define decision logic makes the logic opaque to those on the business side who understand how the decision should be made. It also makes it hard to record exactly how a decision was made, as recording exactly which code was executed is often problematic. To manage logic in this way, most organizations will adopt a Business Rules Management System or a product that contains equivalent functionality. Decision Management Systems require that decision logic be managed in a way that provides design transparency, so it is clear how the decision will be made, and execution transparency, so it is clear how each particular decision was made.

Embedding Predictive Analytics

The management of decision logic is a foundational capability of Decision Management Systems. Most Decision Management Systems should also take advantage of the information available to an organization to improve the accuracy and effectiveness of each decision. Unlike human decision-makers, Decision Management Systems cannot use visualization and reporting technology to make sense of available information. In addition, while people are good at extrapolating from information about the past to see what might happen in the future, systems process data very literally. To maximize the value of available information in terms of better decision-making, Decision Management Systems must therefore embed predictive analytic models derived from historical data using mathematical techniques.
Such models assess the probability that something will be true in the future and make that assessment available to the decision logic in a Decision Management System so that decisions can be made in that context. This shift, from presenting data to people so they can derive insight from it to explicitly embedding analytic insight into systems using predictive analytic techniques, means organizations must adopt additional technologies to analyze their data. In particular they need to adopt a Predictive Analytic Workbench or equivalent functionality. They may also choose to adopt additional analytic infrastructure.

Optimization and Simulation

Many decisions rely on resources that are not unlimited. Whether those resources are staff, product inventory or service capacity, decisions must often be made in the context of a constrained set of resources. Organizations will generally want to optimize their results given these constraints, and that means trade-offs must be made. Organizations must adopt optimization and simulation technologies to manage trade-offs and to ensure that decisions are made in a way that produces the best possible result given the constraints on the decision. These technologies allow the constraints and trade-offs to be modeled and then apply mathematical techniques to select the set of outcomes that will maximize the benefit to an organization. These models can also be used to run simulations of various scenarios to see which will produce the best result for the organization.

An organization established in developing Decision Management Systems will ultimately adopt technologies for all of these capabilities: decision logic management, predictive analytic insight, and optimization and simulation. Some will find it helpful to have more than one product with the same kind of capability; some will standardize on a single product. The products need not be adopted all at once, and some Decision Management Systems require only some of the capabilities.

Monitoring Decisions

The nature of decision-making is that it is often impossible to tell how good a decision will turn out to be for some time. As a result, the ongoing monitoring of the decisions being made, and of their outcomes, is critically important. Such monitoring allows decision-making to be systematically improved over time, both by tracking decision performance and making changes when that performance is inadequate, and by conducting experiments and analyzing the results of those experiments. Most organizations will find that they can use their existing performance management and data infrastructure to perform much of this analysis. However, use of the decision logic and predictive analytic capabilities discussed above will also be necessary. These enable explicit logging of decision performance and outcomes, as well as allowing easy management of experiments in decision-making. In general, this ongoing decision analysis requires design decisions and integration with existing infrastructure rather than additional technologies.

These capabilities come together in an overall platform for building Decision Management Systems, as shown in Figure 1: The capabilities of a Decision Management Systems platform, below. Decision logic capabilities allow the editing of the business rules that represent the decision logic. These business rules are deployed to a Decision Service for execution.
Predictive analytic capabilities allow data to be analyzed and turned either into additional business rules (representing what has worked in the past and is likely to work in the future) or into predictive analytic models that can be deployed either to a Decision Service or to the operational datastore being used by the Decision Service.

Figure 1: The capabilities of a Decision Management Systems platform

Simulation and optimization capabilities are used to manage trade-offs and constraints and can result in business rules that are optimized, optimization models that can be solved in a Decision Service, or an explicit set of actions to be pulled into an operational datastore to drive behavior. All three sets of capabilities rely on data infrastructure to deliver test and historical data, while the predictive analytic capabilities can take advantage of in-database modeling and scoring. The Decision Service itself can execute business rules, score records using predictive analytic models, solve constraint optimization problems and potentially tune predictive analytic models to improve their predictive power while in use. All the capabilities can consume up-to-date information generated by a Decision Service's ability to log its decision-making.

Vendors with all of these capabilities could produce a product that delivers all four in a single, integrated environment. However, because the capabilities discussed above can be used for more than building Decision Management Systems, it is likely that some vendors will continue to package each capability as a separate product while integrating them ever more tightly to make it easier to use them as a set. Other vendors will remain focused on a particular area of capability and will work with partners and standards organizations to ensure that other capabilities can be integrated with theirs. What organizations need in order to build Decision Management Systems is a capability to manage decision logic, to create and deploy predictive analytic insight, and to simulate and optimize outcomes. How best to assemble this functionality will differ from organization to organization.

All the various deployment options described above result in code, or packages of definitions, being deployed to a Decision Service environment. This is typically a conceptual environment that in practice consists of elements of several products. This environment must be able to execute the various elements, usually when invoked through a standard API. The decision itself is made by executing generated code on the underlying platform, business rules on a deployed business rules engine, optimization models on a solver and predictive analytic models on a model execution engine. Decision Services must also be able to log what happened each time a decision was made: which rules fired, which model results were calculated and which outcomes were selected by the optimization model. Again, this logging often involves elements of several products, but conceptually a single log can be generated. Finally, model tuning may be available in the Decision Service, with some analytic modeling code being used to monitor the performance of deployed models and conduct experiments to see how the predictive performance of these models can be improved.
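As a concrete illustration of the Decision Service behavior just described (model scoring, business rule execution and decision logging behind a single entry point), here is a minimal sketch in Python. All names (decide, risk_score, the rule list) are hypothetical; a real deployment would delegate to a business rules engine and a model execution engine rather than plain functions.

```python
import json
import logging
from dataclasses import dataclass
from typing import Callable

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("decision-service")

@dataclass
class Rule:
    name: str
    condition: Callable[[dict], bool]   # conditional element: true or false per transaction
    action: Callable[[dict], None]      # action taken when the rule fires

def risk_score(txn: dict) -> float:
    # Stand-in for a deployed predictive analytic model (e.g. a scorecard).
    return 0.8 if txn["amount"] > 1000 else 0.2

RULES = [
    Rule("high-risk-refer", lambda t: t["score"] > 0.7,
         lambda t: t.update(decision="REFER")),
    Rule("default-approve", lambda t: "decision" not in t,
         lambda t: t.update(decision="APPROVE")),
]

def decide(txn: dict) -> dict:
    """Single entry point, as if invoked through a standard API."""
    txn["score"] = risk_score(txn)       # model execution
    fired = []
    for rule in RULES:                   # business rule execution
        if rule.condition(txn):
            rule.action(txn)
            fired.append(rule.name)
    # Decision logging: which rules fired, which model results were
    # calculated, and which outcome was selected.
    log.info(json.dumps({"rules_fired": fired, "score": txn["score"],
                         "outcome": txn["decision"]}))
    return txn

print(decide({"amount": 2500}))  # -> decision REFER, with a logged trace
```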
These technologies are used to develop and manage the decision logic, predictive analytic insight and optimization models required by a Decision Management System and to deploy them to a Decision Service. Decision Services operate in a broader IT context, however, as shown in Figure 2: Decision Services and decision analysis in a broader architectural context, below. Decision Services are invoked to make decisions within an application context. Increasingly, that application context is a process managed by a Business Process Management System. Decision Services can also be invoked by enterprise applications, both packaged and legacy. While this is less common than invocation from a business process, it is by no means an unusual pattern. A further growing pattern is the use of a Decision Service to support an event processing context, making a decision in response to a pattern of business events and then kicking off a business process or other service as a result. In each scenario the application context fulfills an overall business need, and the invoked Decision Service improves its efficiency and effectiveness.

Decision Services are not standalone systems running on specialized hardware or unique platforms. Instead they run on the standard enterprise platforms in use today. Various technologies support the various application servers, service-oriented platforms and programming metaphors that are common, and Decision Services can be developed to run on any such platform. Decision Services also rely on a modern data infrastructure. This data infrastructure delivers operational data to the Decision Services and may also provide in-database analytic scoring. Business intelligence capabilities typically use the same data infrastructure, delivering insight to human decision-makers. While not required for developing Decision Services, business intelligence often complements these systems by supporting the handling of exceptions.

Figure 2: Decision Services and decision analysis in a broader architectural context

Decision Services can also use business intelligence infrastructure to provide additional context when returning multiple options. Finally, the performance of Decision Services must be monitored to support continuous improvement. This ongoing decision analysis requires both capabilities within the Decision Service, such as support for a champion and a challenger approach to a single decision, and integration with more typical business or corporate performance management capabilities such as dashboards and alerting systems.

There are several product categories in the market for decision logic management, predictive analytics, and optimization and simulation. As Figure 3: Overlapping product categories, below, shows, these product categories often overlap.

Figure 3: Overlapping product categories

While there are many Business Rules Management Systems that only manage decision logic, there are also products that combine the management of decision logic with optimization or with building predictive analytic models. There are also products called Decision Managers or Business Decision Management Systems that manage decision logic, and other decision management products that manage decision logic and build predictive analytic models.
Some Predictive Analytic Workbenches include in-database scoring capabilities while some package this separately, and model monitoring and tuning is likewise sometimes packaged separately.

Navigating Products for Managing Decision Logic

When considering products for managing decision logic there are two main areas of potential confusion: whether decision logic is the product's primary focus, and whether the product is rules-centric or decision-centric.

Decision logic as the primary focus. The first is the degree to which the product is explicitly focused on managing all the logic of a potentially complex decision, rather than managing some decision logic so it can be combined with some analytic insight. For instance, a number of products focused on building and deploying analytic models also allow you to manage some business rules. These are typically focused on either eligibility or cut-offs. Eligibility rules might select a subset of all the possible records in a set before applying analytic models to them, or determine that only certain outcomes are allowed for a given record regardless of what the model might predict. Cut-off rules generally turn predictive scores into simple actions based on clearly defined values. Such capabilities are much to be desired in analytic products, but they will not allow an organization to manage the number of business rules involved in, for example, complex eligibility decisions. Because they assume that analytics are at the core of a decision, they are also unlikely to be effective if used to manage decision logic for decisions that are entirely driven by policy, regulation and best practices and that therefore have no analytic component. For those decisions, a product that is either primarily focused on decision logic, or that treats explicit logic and analytic insight as peers in decision-making, will be more appropriate. Such products are more likely to be referred to as a Business Rules Management System or a Decision Manager. Some decision management platforms treat the two as peers, while products focused on Analytical Decision Management are more likely to focus first and foremost on the analytics.

Rules-centric or decision-centric. The second is a more nuanced consideration. Some products for managing decision logic began as expert systems or focused on the management of business rules. While the users of these systems always used them to automate decisions, this was often implicit in the implementation of the rules rather than explicit in the design. Other products began with a focus on business process, added business rules capabilities and evolved toward a more decision-centric focus. Such products used to refer to themselves as business rules engines, then as Business Rules Management Systems and now, increasingly, as decision management products. In contrast, other tools began with an explicit focus on decisions. These typically allow a decision to be documented as such, including its inputs and outputs, before the decision logic is specified to fill the gap between those inputs and outputs. These have always used decision in their names and are often called Decision Managers or use decision management in their names. Some, from companies with a strong analytic focus, may refer to Decision Analytics as well. This focus may be just a naming thing, with equivalent products having different names, but it can also reflect a subtle yet important emphasis on decisions over business rules as the primary organizing principle of a product.
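The eligibility and cut-off rules described above are easy to picture in code. Below is a minimal, hypothetical Python sketch: an eligibility rule filters records before any scoring happens, and a cut-off rule maps a predictive score to a simple action at clearly defined threshold values. The field names and thresholds are invented for illustration.

```python
def eligible(record: dict) -> bool:
    # Eligibility rule: only adults in covered regions are scored at all.
    return record["age"] >= 18 and record["region"] in {"US", "EU"}

def churn_score(record: dict) -> float:
    # Stand-in for a deployed predictive model returning a probability.
    return min(1.0, record["support_calls"] / 10)

def cut_off(score: float) -> str:
    # Cut-off rule: turn a score into a simple action at defined values.
    if score >= 0.7:
        return "OFFER_RETENTION_DEAL"
    if score >= 0.4:
        return "SEND_EMAIL"
    return "NO_ACTION"

records = [
    {"id": 1, "age": 34, "region": "US", "support_calls": 8},
    {"id": 2, "age": 16, "region": "US", "support_calls": 9},  # ineligible
]

for r in records:
    action = cut_off(churn_score(r)) if eligible(r) else "NOT_ELIGIBLE"
    print(r["id"], action)
```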
Navigating Products for Developing Analytic Insight

In many ways the products that provide support for developing analytic insight are simpler. Most such products are described either as Data Mining Workbenches or as Predictive Analytic Workbenches. These are often easy to compare with one another and offer broadly comparable capabilities. Some such products are narrowly focused, offering a small number of analytic algorithms or support for a particular type of data. More common are workbenches that support a wide range of such techniques and data sources. The only areas of potential confusion concern the support for model management and in-database analytics, and the potential support in some general-purpose decision management platforms for limited development of analytic insight.

Model management. Some analytic products include model management capabilities in the same product used to build analytic models; some package it as a separate capability. There is also a small group of products designed explicitly for model management. There is no particular advantage or disadvantage to either approach, though having a more web-based and less analytics-specialist-oriented environment for model management can be appealing. Some of these model management capabilities support models built using a variety of analytic tools. This support for monitoring and managing models that were built using multiple tools is a significant differentiator, regardless of whether it is packaged with the ability to build models or not.

In-database analytics. Similarly, some analytic workbenches package support for in-database analytics (either development or deployment or both) with the workbench, while others sell it as a separate capability. When evaluating a workbench from a capability perspective, what matters is that the tool supports the databases in use in the organization and the depth of that integration. Packaging may affect pricing, but it generally does not affect capability.

Decision-centric analytics. Some products primarily focused on managing decision logic provide capabilities for developing analytic insight. Some offer what might be described as data mining for business rules, allowing data mining algorithms that produce decision trees or association rules to be used within the tool to find suitable rules from historical data. Some offer data mining algorithms integrated with a decision tree editor for data-driven strategy design. Both such capabilities are highly desirable, and the use of data mining to find business rules is a clear best practice (discussed in the section on analytic collaboration). Nevertheless, these tools do not offer the same range of analytic insight capabilities as a dedicated tool. Some of these platforms go somewhat further and offer automated analytic model building capabilities as well. These are beginning to compete more directly with pure-play analytic workbenches, especially for organizations focused on decisions outside the regulated credit industries that are comfortable with automated modeling approaches. Most users of these tools still occasionally find a reason to use a pure-play analytic workbench.
Navigating Products for Optimization

The big differences between optimization products are a solution focus versus a tool focus, and the degree of tooling available.

Solution or tool focus. Because optimization can be complex to set up and use, many organizations use optimization technology as part of a solution. In this approach the optimization model comes pre-configured, with integrated reporting and simulation interfaces focused on the solution. These might address a scheduling problem, supply chain problems or product configuration. In contrast, a tool focus means a product for optimization that can be applied to a problem but that must be configured before it does anything. Because many of the pre-built solutions are built on a particular tool, organizations can often start with a pre-configured solution and then expand their use by also acquiring the underlying tool. For some organizations, however, only one problem seems to justify optimization, and they will probably be happy with a single solution-focused offering.

Solver or workbench. Some optimization products are essentially just a set of solvers with well-defined APIs, while others offer a complete workbench with debugging tools and graphical interfaces. The solver-only approach lets the tool developer focus on performance and scalability while supporting practitioners who wish to use a particular problem-definition language or editing environment. A more complete workbench tends to be more supportive of less technical users and to involve less work to set up, at the cost of being somewhat more limited in terms of how a practitioner can approach defining the problem.
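To make the tool-focused approach concrete, here is a minimal sketch of a constrained trade-off of the kind discussed under Optimization and Simulation, expressed with SciPy's linprog solver (an assumption; the report does not name any particular solver). It allocates a limited budget and limited call-center capacity across two retention offers to maximize expected value; all numbers are invented.

```python
from scipy.optimize import linprog

# Expected value per customer treated with each offer. linprog minimizes,
# so the objective is negated to maximize.
c = [-30.0, -20.0]              # offer A, offer B

# Trade-offs: a shared budget and limited call-center capacity.
A_ub = [[5.0, 2.0],             # cost per treatment must fit the budget
        [1.0, 1.0]]             # each treatment uses one agent slot
b_ub = [1000.0,                 # total budget
        300.0]                  # total agent slots

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
print(res.x, -res.fun)          # ~[133.3, 166.7] treatments, value ~7333
```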
Managing Decision Logic

The first requirement is a complete set of software components for the creation, testing, management, deployment and ongoing maintenance of the logic of a decision in an operational environment. The most common product category name for this capability is a Business Rules Management System. For the purposes of this report we are concerned only with executable logic, that is, with business rules defined at a level that allows them to be executed by a computer system. Business rules can also be defined and managed as a requirements approach and to ensure consistency and accuracy in manual decision-making, but this is not the focus of this report.

Typically, an executable business rule is simply a statement of what should be done if a given set of conditions is true. Each rule has a conditional element that can be evaluated at a moment in time to see whether it is true or false, as well as one or more actions to take if it is true. These actions can be as varied as sending emails or invoking functions, but generally they involve setting data values. In most Business Rules Management Systems each rule can also have an owner, notes, a version history and other metadata that describes it.

Managed executable business rules deliver many advantages over traditional code, especially when automating and managing decisions. Business rules are easier for non-technical business experts to read, improving business-IT collaboration and improving the accuracy of the business rules relative to code. This is especially true because business rules can also be represented in a variety of graphical and tabular metaphors. Business rules are declarative, so each can be managed independently, simplifying the management and reuse of decision-making logic while allowing more accurate and granular assessment of consistency, completeness and quality. Business rules either fire (the conditional element evaluates to true) for a particular transaction or they do not. This can, and should, be recorded each time a decision is made, and it represents a precise description of how a decision was made. This supports subsequent analysis and improvement of decision-making.

A Business Rules Management System or equivalent functionality gives business users and analysts the ability to make routine changes and updates to critical business systems while freeing IT resources to concentrate on higher value-add projects and initiatives. Even when used by an IT organization in a more traditional way, a Business Rules Management System allows for more rapid change by making it easier to find, make and test changes to decision-making logic.

Managing decision logic requires software that supports a range of activities:

- Integration with other applications and services, and linking business rules to data sources, so that business rules can be developed that will use the data available in existing systems and processes.
- The development and testing of business rules by both technical and non-technical users so that all those involved in defining a decision can participate in writing the business rules.
- Identification of rule conflicts, consistency problems, quality issues and more for both technical and non-technical users so that full advantage is taken of the declarative nature of business rules.
- Assessment of the business impact of changes to the business rules through simulation and reporting to ensure the right changes are being made and to understand the business consequences of changes that must be made.
- Deployment of a defined package of business rules to Decision Services in different computing environments.
- Measuring and reporting of decision and business rule effectiveness based on the results of executing business rules in decision services.

Such a system requires the following capabilities. In a future release of the report a set of specific items to look for in each category will be identified.

A business rule management environment suitable at least for technical users is essential. This environment typically also includes design tools to integrate the deployed business rules with the rest of the enterprise computing environment. Generally this is provided as part of an Integrated Development Environment or IDE, often one based on Eclipse or Visual Studio. Technical users are generally not the only ones who will need to edit business rules. Interfaces to allow business analysts and business users to manage business rules directly and in-context, or tools to allow such interfaces to be built and maintained, are critical elements of a robust approach to managing decision logic. These interfaces could be part of an IDE, though this is less common, and a thin-client interface is more likely. Some products provide editing environments for non-technical users based on the Microsoft Office products, specifically Microsoft Word and Microsoft Excel.

A variety of metaphors are often used to author business rules. A rule flow or decision flow is used to lay out multiple steps within a decision. Business rules can be specified for each of the steps or tasks in such a flow as a decision tree, decision table, rule sheet, decision graph, decision model, rule family or simply as a list of independent rules. The differences between these metaphors and the value of each will be discussed in a future version of the report.
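As a minimal illustration of one of these metaphors, the sketch below represents a decision table as plain Python data: each row is a declarative rule with a conditional element, a resulting action, and the kind of metadata (owner, notes) a Business Rules Management System would track. It is an invented example, not any particular product's format.

```python
from dataclasses import dataclass

@dataclass
class Row:
    min_age: int          # condition column 1
    max_claims: int       # condition column 2
    action: str           # action column
    owner: str            # metadata a BRMS would manage
    notes: str = ""

# A small eligibility decision table; rows are independent, declarative rules.
TABLE = [
    Row(25, 0, "STANDARD_RATE", owner="underwriting", notes="clean history"),
    Row(25, 2, "LOADED_RATE",   owner="underwriting"),
    Row(18, 0, "REFER",         owner="underwriting", notes="young driver"),
]

def decide(age: int, claims: int) -> str:
    for row in TABLE:                       # first matching row wins
        if age >= row.min_age and claims <= row.max_claims:
            return row.action
    return "DECLINE"                        # default action

print(decide(age=30, claims=1))  # -> LOADED_RATE
```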
Verification and validation tools that check business rules for completeness, consistency and logical errors help ensure that valid business rules are being written. Such tools should be suitable for both technical and business users to use and should be integrated with the various editing interfaces provided. These tools should ensure that the business rules being authored are at least potentially valid. They cannot tell if the business rules are the right ones for the business or if they handle every business scenario, but they can tell if they are structurally and logically complete and that they handle known variations in data such as lists of values.

Testing and test management tools that support unit, system and acceptance testing are a necessity. While there are circumstances in which business rules change so rapidly that formal testing is not part of the release cycle, most organizations will still have a set of tests they wish to run before allowing a new set of business rules to be deployed. Managing these tests should be straightforward. Business rules must sometimes be tested with other new components in the context of a broader application deployment, and being able to test the business rules in this context is useful. Many products support integration with open test management standards such as xUnit. Technical users, and ideally less technical ones, should also be able to debug business rules. They should be able to walk through the business rules executing in a decision to see what happens in specific cases. This may be supported only for a local test environment or for both local and production environments.

Impact analysis and business simulation tools to allow non-technical users to see the impact of a set of rule changes on their business outcomes are an increasingly important part of managing decision logic. Business analysts and business users will not generally be willing to make changes to business rules unless they can see what impact a change will have. Similarly, when a change must be made to the business rules, due to a regulatory or policy change for instance, business users will want to see the likely impact of this change. The results must be presented in business terms to be useful: an increase in profitability, a reduction in fraud, etc. These facilities may be provided as a batch tool for running historical or sample data through a set of business rules, or as a more interactive tool allowing a business user to select the data they care about and run new or changed rules against that data. The best practice is clearly moving this closer to the editing of the rules themselves, with the potential business impact of a change being shown automatically as a change is made in the editing tools.

Decision logic must be integrated with the data that will be available when the business rules are deployed. A product needs to provide tools that at least allow technical users to integrate the business rules with the organization's data. In addition it is useful for a product to be able to bring in large amounts of historical data as well as large test datasets to support effective testing and impact analysis.
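In the xUnit spirit mentioned above, a rules release can be gated by ordinary unit tests. Here is a minimal sketch using Python's built-in unittest module against a hypothetical decide() function like the decision-table example earlier; the cases and expected actions are invented.

```python
import unittest

def decide(age: int, claims: int) -> str:
    # Hypothetical deployed decision logic under test.
    if age >= 25 and claims == 0:
        return "STANDARD_RATE"
    if age >= 25:
        return "LOADED_RATE"
    return "REFER"

class EligibilityDecisionTests(unittest.TestCase):
    def test_clean_history_gets_standard_rate(self):
        self.assertEqual(decide(age=40, claims=0), "STANDARD_RATE")

    def test_claims_history_is_loaded(self):
        self.assertEqual(decide(age=40, claims=3), "LOADED_RATE")

    def test_young_drivers_are_referred(self):
        self.assertEqual(decide(age=19, claims=0), "REFER")

if __name__ == "__main__":
    unittest.main()   # run before deploying a new set of business rules
```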
A set of deployment tools that support the deployment of a set of business rules, either as executable code or as a package that can be executed by a high-performance Business Rules Engine, ideally on multiple enterprise platforms, is required. One point of confusion is the difference between a Business Rules Engine and a Business Rules Management System. A Business Rules Engine can be part of a complete system for handling all the things involved in working with business rules. It is clearly an important part, but it deals only with execution: it determines which business rules need to be executed in what order. A Business Rules Management System is concerned with a lot more.

Business rules can be executed in a number of different ways once deployed. Some Business Rules Management Systems support inferencing execution. Based on various algorithms, many derived from the original RETE algorithm, these determine the correct execution sequence based on the structure of the business rules and the data available when they must be evaluated. As business rules fire and change data, the engine reassesses which business rules might need to be fired next. While there are some scenarios that are very difficult or even impossible to handle without inferencing support, they are not common. The key advantages of inferencing in normal use are that it allows the business rules to be written in any order and that it ensures business rules are re-evaluated when the data used in their conditions changes. Business rules can also be executed in a sequential way, using the order specified for the business rules at design time. In many scenarios, especially when most business rules in a set will be executed for most transactions, this approach is faster. It also allows business rules to generate code, which can result in smaller and more portable deployments. Finally, a number of products offer designed execution, where the rules are executed sequentially but the order is determined by automated analysis of the business rules at deployment time. This simplifies execution but allows business rules to be written and edited in any order without any unexpected impacts on their behavior, as the deployment-time analysis will sequence the new and changed business rules appropriately. For most business scenarios all these approaches work well. Each approach has its own set of best practices in business rule writing.

Last, but by no means least, products should offer an enterprise-class repository for storing and managing business rules. This repository may be a complete decision management repository that also stores predictive analytic models and optimization models. It is more likely to be one that only manages business rules. It should provide access control and security, audit trails for changes made to the business rules, and versioning at a number of levels. An extensible repository that allows additional information to be added, as well as an API for repository access, can improve the integration of the product with other enterprise components. Some products provide integration with source code control systems, allowing business rules to be stored and managed alongside code used in the rest of the application.
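The difference between sequential and inferencing execution can be sketched in a few lines. Below is a toy forward-chaining loop in the spirit of, though vastly simpler than, RETE-derived engines: after any rule fires and changes data, all rule conditions are re-evaluated, so authoring order does not matter. This is an illustrative sketch, not how any production engine is implemented.

```python
# Each rule: (name, condition over facts, action mutating facts).
RULES = [
    ("vip",      lambda f: f.get("total_spend", 0) > 10_000,
                 lambda f: f.update(segment="VIP")),
    ("discount", lambda f: f.get("segment") == "VIP",
                 lambda f: f.update(discount=0.15)),
]

def run_inferencing(facts: dict) -> dict:
    fired = set()
    changed = True
    while changed:                     # re-evaluate whenever facts change
        changed = False
        for name, cond, action in RULES:
            if name not in fired and cond(facts):
                action(facts)          # firing may enable other rules
                fired.add(name)
                changed = True
    return facts

# "discount" only becomes true after "vip" fires; rule order is irrelevant.
print(run_inferencing({"total_spend": 12_000}))
# -> {'total_spend': 12000, 'segment': 'VIP', 'discount': 0.15}
```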
Embedding Predictive Analytics

Embedding predictive analytics requires a software component for the creation, validation, management, deployment and ongoing re-building of predictive analytic models. Such a Predictive Analytics Workbench allows a data miner, data scientist, analytics professional or business analyst to explore historical data and use various mathematical techniques to identify and model potentially useful patterns in that data. For the purposes of this report we are not concerned with the use of data mining or predictive analytic workbenches for one-off research projects to answer a specific question, or with the construction of statistical models per se. Only models that can be applied to a specific transaction or item to classify it or make a prediction about it are included. Other forms of data mining and predictive analytics can have tremendous value to an organization but they are not relevant to this discussion of Decision Management Systems. The predictive analytic models created can predict a binary outcome (yes or no), provide a number (often representing a probability or ranking of likelihood) or a selection from a list (of products for instance). They might also cluster or group based on likelihoods and may identify what item is associated with what other items.

Data mining and predictive analytics allow organizations to turn historical data into useful, actionable analytic insight. Data mining and predictive analytic models are often grouped with business intelligence, reporting and visualization under the general term analytics. Data mining and predictive analytics differ from business intelligence capabilities in a number of ways:

- They are focused on extracting meaning about the likely future rather than summarizing or understanding the past: they use historical data to make predictions about what is likely in the future.
- They are probabilistic rather than definitive, in that they rarely if ever make a prediction that something concrete is definitely going to happen. Generally they say how likely something is, make a prediction with a certain degree of confidence, or rank order a set of possible outcomes from most to least likely.
- Rather than relying on the visual processing power of humans to see patterns in data, they rely on mathematical algorithms to explicitly extract these patterns from the data.

This last point has an important consequence for predictive analytic workbench products being used to develop Decision Management Systems. These products must do more than simply define the right mathematical models. Presenting the results of a predictive analytic project as mathematics or even as visualizations and reports is not sufficient. It must be possible to use the product both to produce an effective predictive analytic model and to embed such a model into an operational system. Unless the predictive analytic models produced can be effectively embedded they will not be useful for Decision Management Systems.

A predictive analytic workbench needs to support a range of activities that are generally performed in a highly iterative way:

- Integration with a wide range of data sources so that data can be brought into a modeling environment for analysis. These data sources might be systems that are internal to the organization or external data. Increasingly these sources go beyond traditional relational data sources to unstructured and semi-structured data.
- Cleaning, integration, summarization and exploration of this data, including sampling, identifying outliers, providing distribution statistics and more.
- The creation of an analytical dataset suitable for analysis, including identifying and creating potentially useful derived variables, and managing very large datasets with thousands of attributes (both original and derived).
- Automated or mostly automated analysis of very large numbers of records using a variety of algorithms such as classification, decision trees, linear and logistic regression, clustering, neural networks, nearest neighbor and more. Increasingly the use of ensemble methods, where multiple techniques are applied in combination, must also be supported.
- Creation of analytic representations, models, based on this analysis, such as predictive scorecards, functions or business rules.
- Validation of these models to prove they will be predictive with data not used to build them, as well as assessment of their effectiveness in making predictions.
- Deployment of these models into an execution environment or as code that can be independently executed.
- The definition and management of repeatable processes or workflows to handle all these steps so that they can be repeated with new data, as part of assessing multiple possible approaches, or with minor edits as the user evolves their approach.

One of the most important facets of these kinds of workbenches is their support for an industrial-scale process for building predictive analytic models. Predictive analytic model building used to be something of a cottage industry, with each modeler making their own choices of scripting language and a largely manual process. This approach relies heavily on the skills of the modeler and is hard to scale. With organizations increasingly needing dozens or hundreds of models, a more industrial process is called for. This does not eliminate the skill of a modeler, but it does require more repeatability, automation and scalability in the way predictive analytic models are built and managed. This is where a predictive analytic workbench is essential. A predictive analytics workbench gives data miners, and possibly business analysts, the ability to derive useful probabilities about the future from potentially large amounts of data about the past. These probabilities may group or segment customers or other records, identify the propensity of someone to do something (buy, churn, respond, visit), determine the strength of an association between two records or identify what is likely to be the best combination among many possible ones.

Embedding predictive analytics requires the following capabilities. In a future release of the report a set of specific items to look for in each category will be identified.

Predictive analytic models are typically built from a large amount of data, often pulled from multiple data sources. A predictive analytic workbench must be able to connect to and retrieve information from a variety of structured and unstructured data sources as well as flat files of various kinds. The data available is often not immediately suitable for the construction of predictive analytic models. A predictive analytic workbench provides a variety of tools to allow the clean-up and integration of data prior to modeling. These tools include renaming and re-categorizing data fields, imputing missing values, filtering outliers, extracting samples and transforming data to make it more suitable for modeling. The end result of this data preparation work is what is often called an analytical dataset: a large set of data attributes (some original, some derived) with any hierarchical structure flattened into a single list of attributes.
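Here is a minimal sketch of this preparation step using pandas (an assumed stack; a workbench would offer the same operations through a visual flow): imputing a missing value, filtering an outlier and deriving a variable, ending in a flat analytical dataset. The data is invented.

```python
import pandas as pd

raw = pd.DataFrame({
    "customer_id": [1, 2, 3, 4],
    "income": [52_000, None, 61_000, 9_900_000],  # a missing value and an outlier
    "n_orders": [3, 11, 5, 2],
    "spend": [180.0, 940.0, 260.0, 95.0],
})

df = raw.copy()
df["income"] = df["income"].fillna(df["income"].median())   # impute missing values
df = df[df["income"] < 1_000_000]                           # filter outliers
df["avg_order_value"] = df["spend"] / df["n_orders"]        # derived variable

# The flat analytical dataset handed to the modeling algorithms.
print(df)
```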
Modeling efforts typically begin with exploration of the data available to develop some understanding of the data and of the patterns in that data. A rich set of visualization and graphical tools, as well as statistical analysis routines, helps find the hidden patterns and relationships that might drive an effective model. These tools are often used in conjunction with the data preparation tools so that problems found in graphing the data, for instance, can be corrected in a data preparation routine. The same visualization and analysis tools will also be used to assess model outcomes once models have been developed.

At the core of a predictive analytics workbench is a model creation environment suitable at least for data miners and other analytic users. The modeling environment might also allow business analysts to create and manage the modeling process, typically through a combination of automation and simplified interfaces. Some predictive analytic workbenches are designed for expert users. Some are primarily aimed at these experts but provide simplified interfaces that aim at a broader audience. Some are designed with a single environment that works for both expert and less expert users. While the style of interface and its expectations can vary, all these workbenches create predictive analytic models and related resources in some form of shared repository.

The modeling environment typically involves laying out a series of steps that will result in the construction of a model or models that can be evaluated for performance. Steps will include data preparation and analysis as well as the execution of one or more algorithms from an extensive set. Algorithms supported include clustering, association, linear and logistic regression, decision trees, support vector machines, Bayesian modeling and nearest neighbor techniques, to name a few. It is increasingly common to find ensemble models where several techniques are applied, or one technique is applied with different parameters, and the results aggregated in some fashion to create a single, overall ensemble model. Some predictive analytic workbenches can take advantage of in-database modeling engines that can handle some of the data preparation tasks as well as execute the modeling algorithms themselves on the database server that contains the data being analyzed. This improves performance by eliminating the need to move data from the database to a separate analytic server and takes advantage of the increasingly powerful servers supporting data infrastructure.

Regardless of which technique or set of techniques is used, model performance assessment and comparison tools are used to see how well a model performs. Different models can be compared, and tools such as lift curves (comparing selection using a model to a random distribution) used to see how effective the model would be in production. These tools typically use new data (data that was not used to build the model) to see how predictive the model would be once deployed.
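A compressed sketch of that build-and-assess loop with scikit-learn (an assumed library; workbenches wrap the same steps in a visual flow): fit two candidate models on synthetic data, then compare them on a holdout sample using AUC, the holdout standing in for the "new data" assessment described above.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for a prepared analytical dataset.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0)   # holdout = "new data"

candidates = {
    "logistic_regression": LogisticRegression(max_iter=1000),
    "random_forest": RandomForestClassifier(n_estimators=100, random_state=0),
}

for name, model in candidates.items():
    model.fit(X_train, y_train)
    auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
    print(f"{name}: holdout AUC = {auc:.3f}")   # compare before deploying
```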
Once the final model or models have been identified they must be deployed. A predictive analytic workbench may allow multiple approaches to deployment:

- Models can be used to score data in a batch mode, applying the results back to the database that contained the data from which the model is built.
- Some predictive analytic workbenches can act as a real-time scoring server, using their own scoring engine and providing a web services or other API to allow it to be called during decision-making.
- Scoring code can also be generated (as C or Java, as SQL or as business rules) so that it can be deployed to a Decision Service for real-time scoring.
- In-database scoring is also available, with the definition of the model being pushed to the analytic infrastructure where the scoring engine is running.
- A number of predictive analytic workbenches also allow models to be generated using the Predictive Model Markup Language (PMML), allowing the model to be executed by any business rules or scoring engine that supports this standard.

Models are built from a snapshot of data. As such they age: as time passes, the data being fed into the deployed model may look less and less like the data from which it was built. A predictive analytic workbench needs tools to monitor deployed models to see how their performance is varying over time and to identify variations in performance or in data distributions. Many new models are initially deployed to challenge an existing model, and the performance of both the original champion model and the new challenger model needs to be compared to see if the challenger is good enough to replace the champion. Model monitoring tools need to identify opportunities to refresh and retrain models, and to provide tools that make it easy for users to rebuild models to take advantage of new data.

Some predictive analytic workbenches provide components for automated model tuning and updating. These machine learning techniques monitor the performance of a model as it is used in deployment and automatically adjust its underlying equation based on that performance. Some of these environments can start with no model and gradually build a predictive model based on the results of random experiments, while others are designed to be used with pre-defined models. Model tuning can be left to run forever, or it can tune the model within defined boundaries and flag a model for re-building if its performance starts to drift outside those boundaries. Model tuning capabilities are often deployed in a Decision Service if that is where the model is being executed.

A predictive analytics workbench should offer an enterprise-class repository for storing and managing predictive analytic models. This repository may be a complete decision management repository that also stores business rules and optimization models. It should provide access control and security, audit trails for changes made to models, and versioning.

There is a growing category of software products that allow business rules to be specified and managed alongside predictive analytic models built in the same product. The degree to which large numbers of business rules can be managed, and the range of predictive analytic models that can be built, varies, and such a combined product may not therefore support the complexity required for a specific Decision Management System. These products typically allow models built in other predictive analytic workbenches to be integrated also.
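The champion/challenger comparison described above reduces to scoring the same recent, labeled transactions with both models and comparing a metric. A minimal hypothetical sketch follows; the model objects, metric choice (AUC via scikit-learn) and promotion threshold are all invented for illustration.

```python
from sklearn.metrics import roc_auc_score

def compare_champion_challenger(champion, challenger, X_recent, y_recent,
                                min_lift=0.01):
    """Score recent, labeled traffic with both models; promote the
    challenger only if it beats the champion by a meaningful margin."""
    auc_champ = roc_auc_score(y_recent, champion.predict_proba(X_recent)[:, 1])
    auc_chall = roc_auc_score(y_recent, challenger.predict_proba(X_recent)[:, 1])
    print(f"champion AUC={auc_champ:.3f}, challenger AUC={auc_chall:.3f}")
    return "promote" if auc_chall - auc_champ > min_lift else "keep champion"
```

In practice a monitoring tool would run a check like this on a schedule and also compare the incoming data distribution against the snapshot the champion was built from.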
In-Database Analytics

In-database analytics can mean exactly that: analytic capabilities embedded in a relational or columnar database. The phrase is also used to describe analytic capabilities embedded in data warehouse software, in data appliances and, increasingly, in Hadoop clusters. In-database analytic capability is delivered as a set of libraries, User Defined Functions, that deliver analytic or data mining functions such that they can:

- Access the data in the database, data warehouse, appliance or Hadoop file system in situ, without needing to extract it to some interim format.
- Directly use the memory, parallel processing capabilities and load balancing/processor management of the data infrastructure.
- Be accessed both from specialist analytic tools (for model creation or data quality tasks, for instance) and from operational systems.

In-database analytic capabilities are specific to a particular database, data warehouse, data appliance or Hadoop distribution. Many vendors offer support for multiple data infrastructure platforms. Some capabilities are provided by the data infrastructure vendors, some by specialty analytic vendors, and some through partnerships between analytic and data infrastructure vendors.

In-Database Analytic Capabilities

For Decision Management Systems, the core capabilities to look for today in an in-database analytic product are:

In-database data preparation and quality. Data preparation, integration and cleaning often consume 60-70% of the time and effort on an analytic project. In a traditional approach, data is extracted from the data infrastructure in which it is stored, processed through various preparation steps and then presented to the analytic modeling algorithms that need it. With in-database capabilities, however, these steps all execute in-database. This means the original data is not extracted from the database but is processed in situ. The resulting cleaned and transformed data may be stored in the data infrastructure or passed out to a predictive analytic workbench for further processing. The net is that the data required for analytic modeling is transformed in-database.

In-database model development. In-database model development allows predictive analytic models to be developed using algorithms embedded in the data infrastructure. These algorithms access tables and views directly to get the data they need, process the data using the data infrastructure's processing capabilities, and create a predictive analytic model. This model may be stored in the data infrastructure for in-database scoring or it may be passed out for use elsewhere. These capabilities may be integrated with an external predictive analytic workbench.

In-database model deployment and scoring. In-database model deployment and scoring infrastructure takes models developed using some combination of in-database modeling infrastructure and a predictive analytic workbench and executes them in an operational datastore so they are available to operational systems accessing that datastore. This generally involves turning models into UDFs or stored procedures that can be called using SQL and that take database fields as input.

In the future, more extensive support for analytic model management, and for wrapping analytics in business rules for in-database decision-making, will become increasingly important.
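The "model as a UDF callable from SQL" idea can be demonstrated end to end with Python's built-in sqlite3 module, which allows a Python function to be registered as a SQL function. This is only a stand-in sketch: a real warehouse would install vendor-supplied UDFs, and the scoring equation here is trivially simple and invented.

```python
import sqlite3

def churn_score(support_calls: int, tenure_months: int) -> float:
    # Stand-in for a deployed predictive model's scoring equation.
    raw = 0.1 * support_calls - 0.01 * tenure_months + 0.2
    return round(max(0.0, min(1.0, raw)), 3)

con = sqlite3.connect(":memory:")
con.execute(
    "CREATE TABLE customers (id INTEGER, support_calls INTEGER, tenure_months INTEGER)")
con.executemany("INSERT INTO customers VALUES (?, ?, ?)",
                [(1, 8, 6), (2, 1, 48)])

# Register the scoring function so SQL can call it in situ: no extraction.
con.create_function("churn_score", 2, churn_score)

for row in con.execute(
        "SELECT id, churn_score(support_calls, tenure_months) FROM customers"):
    print(row)   # (1, 0.94), (2, 0.0): scores computed inside the query
```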
The ROI of in-database analytics

As with any product, a return on investment can come from increased revenue or decreased costs. Predictive analytics often adds value by boosting top-line revenue or by driving out fraud. These kinds of returns are due to the use of predictive analytics in general rather than the use of in-database analytics specifically. Nevertheless, in-database analytics offer an ROI of their own, both by increasing value (through speed to market, improved accuracy and increased accessibility) and by decreasing costs.

Speed to Market: The key to deriving ROI from in-database analytics is a dramatic increase in speed to market. Using in-database analytics can result in a 10-100x overall reduction in time from when a team starts to when decisions are being made more analytically in a decision management system.

Improved Accuracy: Predictive analytic models developed in-database might be more accurate than those developed more traditionally.

Increased Accessibility: It is likely that the resulting analytics will be more accessible and so more likely to be used in more places, increasing their reach.

Lower Cost: Primarily from less hardware and improved utilization.

These are all benefits of in-database analytic technology widely available today. Increasingly integrated model management allows for easier monitoring and managing of deployed models, adding further value. Longer term, the possible deployment of a complete decision (business rules and predictive analytic models) in the database will increase this value significantly by making analytic decision-making pervasive throughout the data infrastructure.

Additional material on in-database analytics: Download the In-Database Analytics Thought Leadership Paper, sponsored by SAS, here.

Standards play a central role in creating an ecosystem that supports current and future needs for broad, real-time use of predictive analytics in an era of Big Data. There is a move to real-time scoring, calculating the value of predictive analytic models when they are needed rather than looking them up in a database. At the same time the variety of model execution platforms has expanded, with in-database execution, columnar and in-memory databases as well as MapReduce-based execution becoming increasingly common. Modeling too has changed, with the open source analytic modeling language R becoming extremely popular. The range of data types being used in models has expanded along with the approaches used for storage. This increasingly complex and multi-vendor environment has increased the value of standards, both published standards and open source standards. The explosion of interest in predictive analytics has put a premium on standard approaches, especially R, that will allow the ecosystem to expand to meet demand.

Big Data, driven by increased digitization and the Internet, is commonly described as "the 3 Vs" of Volume, Variety and Velocity. Open source technology has evolved to meet this demand by providing a collection of highly scalable approaches to storing and managing data under the Hadoop label. The role of this open source stack in the predictive analytics market is evolving rapidly as the need to bring this data into the predictive analytics mainstream has grown. New technologies for data storage combined with this growth of Hadoop have put a premium on approaches that allow predictive analytics to be built and executed in a wide variety of platforms, increasing the interest in PMML, the Predictive Model Markup Language. These three standards, R, PMML and Hadoop, are increasingly important in predictive analytics.

R is fundamentally an interpreted language for statistical computing and for the graphical display of results associated with these statistics. Highly extensible, it is available as free and open source software.
The core environment provides standard programming capabilities as well as specialized capabilities for data ingestion, data handling, mathematical analysis and visualization. The core contains support for linear and generalized linear models, nonlinear regression, time series, clustering, smoothing and more.

The biggest opportunity for R is the number of people using it. It is widely used in academic programs and in not-for-profit and government projects. As more professionals see analytics in their future, R is also appealing as a tool to learn with. R usage has risen steadily in the Rexer Analytics Survey every year since the survey first started asking about it: in 2013, 70% of respondents reported using it, while 24% said it was their primary tool. In addition the number of R algorithms available is huge, with over 5,300 packages that extend R in some way; it is hard to imagine an algorithm that is not available for R.

R is an open source project, however, and many companies will need commercial support and training services to succeed. In addition, parallelism, scalability and performance are issues, particularly for the base algorithms. Commercial vendors are mitigating this by providing their own implementations. Tooling is also an issue, with the basic R environment being script-based. Finally, deployment into production is technically complex if using only the base product.

Organizations should make R part of their predictive analytics adoption and rollout strategy. Even for organizations already committed to a commercial platform it makes sense to take advantage of R at some level, and organizations should explore their platform's support for integrating R. Plan on working with a commercial vendor that has a solid plan for R in terms of providing scalable implementations of the algorithms and either a better development environment or integration with graphical modeling tools.

Hadoop consists of two core elements: the Hadoop Distributed File System (HDFS) and the MapReduce programming framework. An open source project, Hadoop development started in 2004, inspired by earlier work at Google, and became an official top-level Apache project in 2008. HDFS is a highly fault tolerant distributed file system that runs on low-cost commodity hardware, allowing very large amounts of data to be stored cheaply. MapReduce is a programming framework that breaks large data processing problems into pieces so they can be executed in parallel on many machines close to the data that they need to process.

Hadoop provides a distributed, robust, fault tolerant data storage and manipulation environment that is well suited to the challenges of Big Data. The use of commodity hardware allows it to scale at low cost, while the ability to apply the schema of data only when it is being read means Hadoop is very flexible for a wide variety of data types. Storage and processing are streaming-centric and this enables the environment to handle fast moving data. Hadoop is, however, a programmer-centric environment and there is no support for SQL in the base environment. Hadoop structures are better at batch processing than they are at interactive systems, and Hadoop lacks any specific data mining or predictive analytics support.

Hadoop has a lot of potential for companies adopting predictive analytics, but it must be applied in context. Beginning with a business problem, a decision that must be made, determines the analytics that will be required and thus what kind of data will be required.
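The MapReduce programming model described above can be sketched in a few lines of Python. This is a single-process illustration of the pattern only; in Hadoop the map and reduce phases run in parallel across many nodes, close to the data in HDFS.

```python
from collections import defaultdict

def map_phase(record):
    # Emit (key, value) pairs; here, one pair per word.
    for word in record.split():
        yield (word.lower(), 1)

def reduce_phase(key, values):
    # Combine all values emitted for one key.
    return (key, sum(values))

records = ["the quick brown fox", "the lazy dog", "the quick dog"]

# "Shuffle" step: group intermediate values by key.
grouped = defaultdict(list)
for record in records:
    for key, value in map_phase(record):
        grouped[key].append(value)

print(sorted(reduce_phase(k, v) for k, v in grouped.items()))
```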
Beginning with the business problem in this way creates a use case for Hadoop by identifying a business problem that requires data not already available in existing infrastructure. Organizations that lack familiarity with open source should consider one of the commercial organizations that support Hadoop. Once Hadoop becomes part of the data infrastructure for an organization, it is important that it is supported by the rest of their decision management infrastructure.

PMML is an XML standard for the interchange of predictive analytic models developed by the Data Mining Group. The basic structure is an XML format document that contains a data dictionary, data transformations and models. PMML started in 1998 with 0.7, moving to a 1.0 release in 1999. Since then the standard has seen multiple releases, with 4.1 being the most recent (in 2011). The 4.x releases marked a major milestone with support for pre- and post-processing, time series, explanations and ensembles.

PMML offers an open, standards-based approach to operationalizing predictive analytics. Support for PMML is increasingly broad-based, with analytic tools, databases, data warehouses and server deployments. Business rules and other development environments also increasingly support it. The primary challenge for PMML, as it is for any standard, is to get the vendor community to regard support for it as more than just a "check the box" capability. Standards such as PMML also struggle to get vendors to stay current and support the latest release. For PMML this is particularly an issue for the support in PMML 4.x of pre- and post-processing. Finally, not everything that can be done in predictive analytic tools can be generated into PMML.

All organizations approaching predictive analytics should include PMML in their list of requirements for products. Selecting analytic tools that do a good job of generating and consuming PMML and identifying operational platforms that can consume and execute PMML just makes sense. While organizations committed to a single vendor stack may be able to avoid this requirement, even there the ability to bring models developed by a consortium or third party into that environment may well prove critical, while partners may need to execute models but not share the same vendor stack.

Future Standards

There are also some future developments that are worth considering: the emergence of the Decision Model and Notation standard, growing acceptance of Hadoop 2 and planned updates to PMML.

The Object Management Group recently accepted a new standard, the Decision Model and Notation standard. DMN, as it is known, is now finalized. DMN provides a common modeling notation, understandable by both business and technical users, that allows decision-making approaches to be precisely defined.

Hadoop 2.x (technically Apache Hadoop 2.2.0) was released in October of 2013. It's considered a future development because most Hadoop users are not using it yet. Hadoop 2.x is really all about YARN, a resource management system that manages load across Hadoop nodes and allows approaches other than MapReduce to be used.

PMML Release 4.2 is expected to be released in the first half of 2014. As with 4.1, release 4.2 is expected to improve support for post-processing, model types and model elements.
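To give a flavor of the basic XML structure described earlier, the sketch below parses a deliberately abbreviated PMML fragment with Python's standard library. The fragment is illustrative only: a real PMML document would carry a complete model specification rather than an empty model element.

```python
import xml.etree.ElementTree as ET

# Abbreviated, illustrative PMML fragment: a data dictionary plus a
# (deliberately empty) model element. Real documents are far richer.
PMML = """\
<PMML version="4.1" xmlns="http://www.dmg.org/PMML-4_1">
  <DataDictionary numberOfFields="2">
    <DataField name="tenure_months" optype="continuous" dataType="double"/>
    <DataField name="churn" optype="categorical" dataType="string"/>
  </DataDictionary>
  <RegressionModel modelName="churn_model" functionName="regression"/>
</PMML>
"""

ns = {"pmml": "http://www.dmg.org/PMML-4_1"}
root = ET.fromstring(PMML)
for field in root.findall("pmml:DataDictionary/pmml:DataField", ns):
    print(field.get("name"), field.get("optype"), field.get("dataType"))
```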
Release 4.2 is particularly focused on improving support for predictive scorecards (especially those with complex partial scores), adding regular expressions as built-in functions, and continuing to expand support for different types, such as continuous input fields in Naïve Bayes models.

Additional material on standards in predictive analytics: Download the Standards in Predictive Analytics Thought Leadership Paper, sponsored by The Data Mining Group, Revolution Analytics (now Microsoft) and Zementis, here.

Optimization and Simulation

An optimization suite is an environment for defining and solving mathematical models and for simulating the differences between multiple similar mathematical models. An optimization suite allows a modeler or business analyst to define a business objective and a set of constraints and then solve this problem to see how best to run the business. Optimization suites support what is sometimes called Operations Research or Management Science.

There are really three uses of optimization in the context of Decision Management Systems:

- When a decision has a potentially complex answer that involves multiple elements, it may be effective to optimize the selection of these elements.
- When a decision answer is a single element, it may be useful to optimize across many decisions to allocate the available answers to each specific decision most effectively.
- When reviewing possible decision-making strategies as part of decision analysis, it may be possible to use optimization to tune or select between these strategies.

Optimization allows organizations either to find a feasible solution to a heavily constrained problem or to maximize the value gained from a constrained set of resources by finding the most profitable, quickest or cheapest combination of resources that is allowed.

Optimization differs from both business rules and predictive analytics in a number of ways:

- Business rules are absolute where optimization need not be. For instance, business rules allow an offer to be made to someone only if certain conditions are true, where an optimization model might allocate offers based on where they will be most effective. Optimization can be effective when business rules are numerous and potentially contradictory, as it allows for trade-offs between values where business rules require defined sets of conditions.
- An analytic model is created through analysis of historical data, while an optimization model is built explicitly from business know-how; historical data may be used to see how the model would have worked in the past (though this is not necessary).
- Because predictive analytic models are built and executed separately they are often very quick to execute. Optimization models, in contrast, must be solved each time they are used, and this can require significant time and resources.

An optimization suite needs to support a range of activities:

- Defining a constrained optimization problem as a mathematical model using variables, an objective function and constraints, both hard and soft.
- Solving this problem, often multiple times as elements of the problem are changed and re-assessed.
- Integration with a wide range of data sources so that data can be brought in and run through a defined optimization model. These data sources might be systems that are internal to the organization or external data.
- Simulation and comparison of different scenarios by a non-technical user to see what the best choice is likely to be going forward.
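As a minimal illustration of defining and solving such a constrained model, the sketch below uses SciPy's linear programming solver. The offer-allocation scenario and all of its numbers are invented for the example.

```python
from scipy.optimize import linprog  # assumes SciPy is installed

# Invented scenario: choose how many premium (x0) and standard (x1)
# offers to extend, maximizing expected profit subject to a budget and
# an agent-capacity constraint. linprog minimizes, so profit is negated.
profit = [-50, -30]           # expected profit per offer
A_ub = [[20, 10],             # cost per offer (budget row)
        [1, 1]]               # agent time per offer (capacity row)
b_ub = [10_000,               # total budget
        800]                  # total offers agents can handle

result = linprog(profit, A_ub=A_ub, b_ub=b_ub,
                 bounds=[(0, None), (0, None)])
print(result.x, -result.fun)  # optimal offer mix: 200 and 600, profit 28000
```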
An optimization suite gives modelers, and possibly business analysts, the ability to manage tradeoffs and constraints to find the optimal action to take. An optimization suite requires the following elements.

At the core of defining an optimization model is a modeling language or languages. Some optimization suites have their own such language, but a number of popular ones exist and some solvers (see below) can support several languages. Most optimization suites will provide an optimization model development environment suitable for modelers to specify models in one or more of these languages. This environment may be based on a commercially available IDE such as Eclipse or Visual Studio. Debugging and profiling tools allow modelers to review and change the model to correct identified problems: find conflicts, relax constraints or profile performance. Models can be complex and even unsolvable, so profiling and debugging tools are essential to allow a viable model to be defined.

Most optimization suites include multiple engines or solvers that apply mathematical techniques to the developed models to solve the problems defined in those models. These solvers can be specific to different kinds of problems, such as linear programming problems, mixed integer problems, quadratic problems and combinations such as mixed integer quadratic problems. These solvers may be used to run scenarios, to find optimal actions that can be loaded into a production system as a batch, or can execute in a Decision Service to solve an optimization problem as part of a single decision. In addition, many standalone solvers are available.

Optimization models are coded or constructed by hand, but scenarios typically involve a large amount of data, often pulled from multiple data sources. An optimization suite must be able to connect to and retrieve information from a variety of structured and unstructured data sources as well as flat files of various kinds, and present this data for scenario analysis.

Many optimization problems require an interface that allows a business analyst or business user to run and compare scenarios based on these models and associated data. Such scenario analysis involves rich visualization and the ability to bring real-world historical data into the system to run through the model. Optimization suites include either scenario analysis interfaces or the ability to rapidly generate such interfaces for a given model.

The results of optimization can be deployed in a number of different ways. Deployment tools in an optimization suite may support the deployment of a model as results or recommendations, the packaging of a model to run against a solver running in another environment at run time, or the conversion of optimal actions into rules that mimic the assignment of an optimal action.

An optimization suite should offer an enterprise-class repository for storing and managing optimization models and associated scenarios. This repository may be a complete decision management repository that also stores business rules and predictive analytic models. It should provide access control and security, audit trails for changes made to models, and versioning.

Monitoring Decisions

The final area of capability is that of monitoring and improving decisions over time. These capabilities are essential for Decision Management Systems both because decisions are high-change components and because the time it takes a decision to come to fruition can be extensive, making it hard to tell good ones from bad ones.
There are many drivers of change in decision making. Regulations change, so organizations must change how they make eligibility decisions to remain compliant with those regulations. Policies change, so organizations must, for instance, change their validation of suppliers to track new data requirements. Competitors change, so organizations that wish to remain competitive must change their discounts or pricing. Markets, such as the financial or credit markets, change, so organizations must constantly change the way they assess risk. Consumer behavior changes regularly and continually, so organizations working with consumers must constantly address these changes in their decision-making. Finally, of course, fraudsters adapt and seek new loopholes to exploit, so organizations must change how they detect and process fraud to focus on new fraud as it develops.

In addition to outside changes that explicitly drive changes to decision-making, organizations want to continuously improve their decision-making. The challenge for some decisions is the time it takes for decisions to play out: it may be weeks or months before an organization knows if the decision was a profitable one, for instance. To continuously improve in these circumstances it is essential to be able to conduct experiments and compare their results. Such an experiment makes the same decision in two or more different ways, applying the different approaches to different transactions and comparing the results. Sometimes called adaptive control, champion/challenger or A/B testing, these approaches drive continuous improvement in decision making. As shown in Figure 4: Continuous improvement in decision making below, this approach requires that the results of a decision be evaluated, predictive analytic models and business rules updated and refined, and new challengers or alternatives developed. These are fed back into the decision-making loop and used to make future decisions. The results of these decisions are evaluated in turn, with successful experiments being adopted, unsuccessful ones dropped and new ones developed in a continuing cycle.

Figure 4: Continuous improvement in decision making

The capabilities to support monitoring and improving of decisions are not typically found in a single software product. Instead these capabilities drive the requirements for products used for both decision logic management and embedding predictive analytics.

The primary capability required for decision monitoring is that of logging decision execution. When a decision is made by the Decision Management System it must be possible to log how that decision was made and which business rules fired. This log should include any predictive analytic model scores calculated during the decision as well as the specific action recommended by the Decision Management System. In addition, these decision-making logs should be stored in a way that allows them to be integrated with information about the response of customers and others to the decision: did the customer accept the offer, did the salesperson override the price with an additional discount, was the deal closed, and so on. The long-term results, such as orders placed or customers retained, that can be attributed to these responses are logged by other systems. It should also be possible to tie the decision-making specifics to these results. While logging is essential for ongoing improvement of decision making, logging also supports compliance and audit needs by providing complete execution transparency.
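A minimal sketch of this experimentation pattern follows: transactions are routed at random between a champion and a challenger strategy, and each decision is logged with the approach that produced it so results can later be compared. The strategies, traffic split and log format are all invented for illustration.

```python
import json
import random
import time

def champion(txn):
    return "offer_A"                       # current production strategy

def challenger(txn):
    return "offer_B" if txn["value"] > 100 else "offer_A"

def decide(txn, challenger_share=0.1):
    # Route a small share of traffic to the challenger at random.
    arm = "challenger" if random.random() < challenger_share else "champion"
    action = challenger(txn) if arm == "challenger" else champion(txn)
    # Log the decision, including which approach produced it.
    print(json.dumps({"ts": time.time(), "txn": txn["id"],
                      "arm": arm, "action": action}))
    return action

decide({"id": "T-42", "value": 250})
```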
When an audit or compliance review is conducted, it will be possible to tell exactly how a decision was made and whether or not that decision followed the correct guidelines.

To ensure continuous improvement of decisions it will often be necessary to conduct experiments. These experiments typically involve multiple approaches to the decision logic of the decision, to the predictive analytic models used in the decision, or to both. Additional decision logic must be managed to determine which of the approaches should be applied to a specific customer or transaction, and it must be possible to record this as part of the decision itself. All products suitable for managing decision logic can manage experiments in this way. Some products for decision logic management have additional capabilities built in to make it easy to manage, review and compare the various approaches being used within a decision.

The performance of a decision can and should be managed and monitored in the same way any other aspect of business performance is managed and monitored. Generally it is straightforward to apply the standard performance management capabilities of an organization to decision logs to see trends, hotspots, etc.

While changes to decisions and decision logic are sometimes extensive, requiring all the capabilities described above, sometimes more localized and focused changes are required. These should generally be made by business users so that a full IT cycle can be avoided for what could be regular, minor updates. To make this work it must be possible to use the decision logic management capabilities designed for non-technical users to present a business person with their own business rules, in context. Ideally this environment will only allow them to make changes that make sense and will present no unnecessary information. Most products for managing decision logic either include suitable interfaces or allow suitable interfaces to be developed.

As noted above, it is important to provide impact analysis tools to allow non-technical people to rapidly see the business impact of any changes they make. This should cover both design impact and execution impact and involves more business-centric functionality than is required for testing. Impact analysis tools may need to consider changes to decision logic, to predictive analytic models, or to both. See above under Overall Architecture.

When multiple decision-making approaches are being used in parallel it will be essential that the effectiveness of these alternatives can be assessed. Capabilities such as swapset analysis (showing which customers, for instance, would get offer B rather than offer A) as well as more general comparison of business performance metrics are critical. In addition, simulation and what-if analysis tools that can use each alternative approach and compare the outcomes of multiple simulations based on the approaches will be required.

Key Characteristics

Experience in working with organizations that are developing Decision Management Systems shows that while there are many ways to develop them effectively, certain key characteristics come up repeatedly as critically important. These characteristics fall into a number of areas including the completeness of the platform, engagement of business users, architectural flexibility, organizational scale and decision monitoring.
This set of characteristics is neither a definition of a complete set of features and functions required to build a Decision Management System nor a complete list of characteristics for any of the product categories. It is intended as a set of characteristics you can look for in products you are purchasing or using that will support a focus on Decision Management Systems.

A small number of vendors offer a complete platform for building Decision Management Systems. These platforms handle decision logic or business rules, support data mining and predictive analytic modeling, include constraint-based optimization and provide monitoring and integration capabilities for deployed systems. While it is not necessary to buy a complete platform from a single vendor, it is valuable for products to see themselves as part of a broader ecosystem. For instance, Business Rules Management Systems that are aware of predictive analytics and offer integration with such systems, and predictive analytic workbenches that offer business rules-friendly deployment options, are more suitable for Decision Management Systems than more narrowly focused products.

Complete Platform

A complete platform is an integrated set of offerings that allow for the management of decision logic, the building and deployment of predictive analytic models and the mathematical optimization of decisions. These offerings are either a single product or a product set with a common user interface, shared repository and common tooling that operates across the products. Support for decision monitoring and analysis is either provided or the data is made available to standard reporting and dashboard components.

Complete Ecosystem

A company may not offer a complete platform for Decision Management Systems themselves while still supporting a complete platform through their ecosystem. By supporting open standards such as PMML and by partnering with other vendors that offer more pieces of the puzzle, vendors can offer a Complete Platform Ecosystem.

Some companies are not focused on Decision Management Systems but on providing a specific component. They may be focused only on managing decision logic, building predictive analytic models or constraint-based optimization. They may not even think of themselves as participants in the development of Decision Management Systems. These companies are not likely to have a complete platform, nor are they likely to actively partner to develop a complete platform ecosystem. Their products can still be easy to integrate and use alongside other products and can support standards such as PMML for predictive analytic models or JSR-331 for constraint-based optimization. For standalone products focused on a specific technology market this kind of openness is critical in being part of a complete platform for Decision Management Systems.

The agility and adaptability of Decision Management Systems crucially relies on the engagement of business users. The extent to which products being used to build these systems can bring business users into the development team is therefore critical. Products that focus on allowing business users to read and write decision logic, participate actively in building or reviewing analytic models, and allow non-technical users to run through scenarios are more likely to be successful than those focused only on technical developers.

Business User Analytic Modeling

Analytic tools can engage business users by providing an environment designed for non-technical users to create and use analytic models.
This might be a complete environment that uses automation and machine learning algorithms to build predictive analytic models with minimal data mining expertise required. Such use of machine learning algorithms and automation should be complemented by user interfaces, reporting and automated checks designed to support a less knowledgeable user. These additional capabilities can ensure that problems such as overfitting are avoided and that test and validation data is automatically set aside, for instance.

Alternatively, a product might be a business-user-friendly environment layered on top of a more typical data mining/predictive analytic workbench. This would use wizards and other simplifying features to make it possible for non-technical users to do data mining and create predictive analytic models. Generally these environments integrate with the more traditional modeling environment and store models in the same repository, such that modeling specialists can refine or enhance the models built using these less technical interfaces.

Business User Rule Management

The management of decision logic by non-technical users, non-programmers, is a key element in delivering the agility required of a Decision Management System. While any product focused on managing business rules or decision logic might be said to allow some business user rule management, true business user rule management requires a number of elements.

First, the rules themselves must be approachable. Using declarative statements in place of procedural code, so that each rule can be considered and edited independently, and using a business user vocabulary rather than technical data element names will ensure readability and clarity. Supporting a verbose, readable syntax in near natural language rather than terse programmer-centric constructs helps, as does emphasizing graphical editing of rules in decision tables, rule sheets, decision trees, decision graphs and decision flows, so that as little as possible has to be written out longhand.

Because business users are not programmers, testing tools need to be accessible to them and excellent completeness and correctness checks are essential. Ideally these tests are performed inline, as business rules are edited, to ensure that obvious mistakes and omissions are avoided.

Business users do not like technical environments designed for programmers, so support for the editing of business rules outside these environments, through a web interface or a point-and-click, business-friendly editing environment, makes rules more accessible. Such editing environments might also include support for editing using standard Microsoft Office products. Ideally these editors will be embeddable or mashable so that rule management can be embedded in environments focused on a specific business task rather than on rule editing.

Support for a learning curve

Most organizations will not be able to jump straight to either business user analytic modeling or business user rule management. They will need to bring business analysts on board first, exposing some of the tasks previously performed by IT or analytic teams to these semi-technical users. Over time the role of business analysts can be expanded and true business users brought in to work on certain elements. Products that provide way stations and gradual increases in complexity through multiple editing environments will be easier to adopt than those with a more limited set of options.
For example, a product that allows users to bring analytics to bear incrementally rather than all at once will improve analytic adoption. If simple interfaces allow access to candidate association rules and proposed splits in decision trees, then users will become gradually accustomed to the power of analytics to improve their decision logic. Over time they can be exposed to completely data-driven decision trees, unsupervised clustering and ultimately more complex predictive analytic models. Similarly, rule editors that allow a user only to change numeric values in a locked-down rule can get users used to the idea that they can change decision logic. Over time, editors that allow new rules to be built based on templates, and perhaps new rules to be built using a point-and-click editor, can bring users up the learning curve gradually.

Impact Analysis

One of the biggest barriers to business users taking ownership of their decision logic, in fact probably the biggest single one, is an inability to see what the impact of a change will be. Products that provide strong impact analysis tools, especially tools that allow a non-technical user to see how the change they are considering will impact their business results, will be more able to drive successful business user engagement. Impact analysis can be done using functions designed for regression or performance testing. The ability to do impact analysis and simulation using real data in a business-friendly way is invaluable, however. This involves simple-to-use interfaces for loading data, an ability to assign business value to different outcomes and, generally, the ability to get results into Excel for further analysis. Ideally the environment should continuously perform impact analysis as changes are being made, so that all edits are made in the context of their business impact.

Impact on application context

Simulating the impact of a change on the application or process context in which the decision is being made is sometimes critical. For instance, the impact of a change to fraud detection decisions may be best considered in terms of the workload created for fraud investigators and the average handle time for fraud cases. These measures are not measures of the decision performance alone but include process and application design issues. An ability to determine the broader business impact of specific changes made to decision-making is a potentially very powerful capability. It may seem that this requires the application or process context and decision-making technologies to share a vendor. Certainly this capability is easier to provide in those circumstances, but most organizations will ultimately find themselves in a more heterogeneous environment as noted above, so more flexible capabilities that allow decision simulation to be integrated with simulation capabilities from other tools should be valued as well as more packaged capabilities.

Decision Management Systems automate decisions that must often be used in multiple channels, where these channels may be supported by different applications and architectural approaches. Decisions may need to be made as part of business processes, in response to events detected or in support of legacy environments. A degree of architectural flexibility is therefore very useful in products used to develop Decision Management Systems. Support for multiple platforms and deployment styles as well as a wide range of integration options helps a lot.
Cloud ready

The recent growth of the cloud as a platform for enterprise applications means that more organizations are relying on cloud-based solutions for CRM, HR and other applications. Because Decision Management Systems must integrate with these systems, it is becoming increasingly important for products used to build Decision Management Systems to be cloud-ready. This means being able to connect to cloud-based systems to access data and being deployable to the cloud so that decision services can be easily integrated into cloud-based systems.

The cloud also has a lot to offer for the development of Decision Management Systems. Many tasks, such as building predictive analytic models and running simulations or impact analysis, require a great deal of computing power. Being able to push analytic modeling tasks, simulation runs and impact analysis execution onto cloud-based resources means these can be run in the background while the user works on something else using their personal computer, and can greatly increase the scope of what is possible in these tasks. For products being evaluated for Decision Management System construction, the ability to integrate cloud resources for these high-compute-power tasks offers great productivity increases.

Heterogeneous environment

Most organizations of any size have a heterogeneous environment with multiple operating systems, multiple databases, different communication protocols, etc. Different channels have different systems, mobile devices and in-store or kiosk/ATM machinery are unique, and organizations often have layers of computing equipment of different ages. No organization ever has a single, coherent architecture across all its systems, at least not for long. Because decision-making components must often support multiple channels and be consistent across multiple systems, products for Decision Management Systems should have multiple deployment options and be easy to deploy and integrate with these different operational environments.

Organizations are often heterogeneous in another way. Some organizations use multiple business rules management systems, and many use multiple predictive analytic workbenches as analytic modelers choose their own or use a tool to get access to a specific algorithm. Tools that recognize they must operate in this environment will generally be preferred, therefore, especially in the ongoing evolution and management of decision management systems, e.g. in analytic model management.

Embeddable management and control components

Decision Management Systems do not stand alone. In particular, the management and control of Decision Management Systems should be easy to integrate into other management and control interfaces. For instance, it should be easy to integrate analytic model management reporting with more general business performance reporting, and business rule management components should be embeddable in other interfaces. A key criterion, then, for products used to build Decision Management Systems is how easy it is to embed management components into portals and dashboards built using other tools, feed analytic model management data or rule performance data into a regular performance management environment, and so on.

There is tremendous interest in "Big Data" at the moment as the rapid growth of social media, weblog data, sensor data and other less traditional data sources creates new challenges for managing this information.
While much of this interest has been around supporting queries and reporting, organizations are beginning to use these new data sources in their Decision Management Systems. Products that can support both traditional and newer Big Data sources therefore offer increased scope for organizations going forward. Support for Big Data involves being able to bring potentially very large amounts of data stored in NoSQL systems such as Hadoop into analytic modeling as well as into the operational environment. As many of these sources are less structured, it also involves supporting text analytics and operations that use text operators. Flexibility in data definition, so that the variety and velocity of these data sources do not disrupt operations, will also make a big difference. Big Data is often described in terms of an increase in volume, an increase in velocity and an increase in variety: more data, of more types, arriving more quickly. This increase in Big Data volume, variety and velocity has clear implications for Decision Management Systems. See the section on Big Data for more details.

Most products used for developing systems are unconcerned with the operation of those systems once they are deployed. Products used to develop Decision Management Systems, in contrast, offer much more value if they are able to support the ongoing monitoring and improvement of the decision-making embedded in those systems. Products that provide analysis and other tools that integrate with deployed systems are particularly useful in this regard.

Decision Performance

Measuring overall decision performance by tying decision outcomes and decision-making approaches to business results is an important aspect of Decision Management Systems. In practical terms this means being able to easily log the decisions made, including those made as part of A/B or Champion/Challenger tests, so that they can be integrated with overall business performance data in a reporting environment. Products that allow this kind of recording to be done automatically, or with flags and settings rather than code, are preferred as they create a lower maintenance overhead and are more likely to stay up to date as time passes and the decision making in the system evolves.

Model Performance

Predictive analytic models are generally built at a point in time, and so their performance, in terms of how predictive they are, degrades over time. Predictive analytic workbenches that provide automated facilities for monitoring model performance and for identifying models whose performance is degrading are to be preferred over those that require a development team to hand-code this kind of model performance monitoring. In addition it is often helpful if model performance monitoring tools can support models built in multiple environments, as this is a common situation.

Rule Execution

One of the most important ways in which decisions can be monitored is through logging the rule execution involved in the decision. While this kind of logging can be hand-coded into almost any system, a tool that allows this to be turned on and off for different parts of the decision, that handles this automatically as a background task, and that supports it without a significant performance impact is highly desirable. Logs that can be easily stored in database tables and used for reporting, and logs that can be easily converted into or viewed in a more verbose format (using actual rule names, for instance, rather than IDs), are also more useful.
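One common way to automate the model performance monitoring described above is a Population Stability Index (PSI) check, which compares the distribution of scores the model produced at build time with the distribution it is producing in production. The sketch below is a minimal illustration; the score-band shares and the 0.25 alert threshold are conventional but by no means universal.

```python
import math

def psi(expected_shares, actual_shares, eps=1e-6):
    # Sum of (actual - expected) * ln(actual / expected) over score bands;
    # eps guards against empty bands.
    return sum((a - e) * math.log((a + eps) / (e + eps))
               for e, a in zip(expected_shares, actual_shares))

baseline = [0.25, 0.50, 0.25]   # score-band shares when the model was built
current = [0.10, 0.45, 0.45]    # score-band shares observed in production

drift = psi(baseline, current)
print(f"PSI = {drift:.3f}", "- investigate model" if drift > 0.25 else "- OK")
```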
As with any technology, performance and scalability should be considered as part of a product selection. Most of the products listed in the appendix are scalable and perform well enough for most if not all scenarios. Organizations with specific and very challenging performance and scalability requirements should be sure to consider these explicitly. For most organizations it is enough to look for solid scalability and for performance adequate to support real-time decisioning.

Scales up and out

In general it is more important to consider whether a tool scales well than to assess its particular performance on a given piece of hardware. If a product scales up and out well, then more, or more powerful, hardware can be bought and used effectively as demands increase. If a product does not scale then, even if its initial performance is superior, an organization runs the risk that future demands cannot be met. Products that support multi-core processors, in-memory processing and distributed processing will scale better than those that do not. This is especially important in high-compute-power functions such as analytic modeling, optimization, simulation and impact analysis.

There is a general move from batch to real-time decision making in organizations of all sizes and types. Many initial Decision Management Systems, however, are batch oriented or have demands that are not truly real-time, with several seconds allowed for responses. Over time most organizations should expect to see more demand for real-time decision making as well as increasing needs to support streaming/event-based systems. Products that have the kind of low-latency, time-based capabilities these solutions need are therefore to be preferred over those that do not.

Organizations adopting Decision Management Systems generally start with only a single project or two. Over time they become aware of the ROI of Decision Management Systems and the potential for them to change how their organization, its systems and business processes operate. At this point they begin to scale up their plans for Decision Management Systems. Most organizations do not wish to replace the tools with which they are familiar with new tools during this expansion. As a result, products with characteristics that support organizational scale will be usable longer. In particular, products that support industrialized analytics and enterprise rule management will scale to organization-wide use.

Industrialized analytics

When organizations first adopt predictive analytic models they generally only build one or two models. These models are often hand-crafted by an analytic practitioner and then deployed by hand into an operational environment through batch updates or manual re-coding of the model. As the use of predictive analytics expands, however, hundreds or even thousands of analytic models may be required by the organization. These models must also be monitored and regularly updated if they are to maintain their level of predictability. Given that most organizations cannot simply recruit many more analytic professionals, a more scalable process is required. An industrialized analytic process emphasizes the use of automation in model construction, both to prepare and analyze data sources and to perform some or all of the modeling itself. It focuses on rapid deployment of models to real-time operational environments and monitors these models automatically to identify when they need to be re-built.
Analytic professionals are engaged to handle difficult problems, to check on models that show problems or otherwise to supervise and manage a largely automated production line for analytic models. Supporting this environment requires analytic tools that emphasize scale and automation, not just model precision.

Enterprise rule management

For decision logic the problem is slightly different. Reviewing business rules, comparing them to new regulations or policies and making appropriate changes are still manual activities, even when scaling business rules to the whole organization. The challenges come in being able to find the business rules that matter, ensuring the business rules that should be reused are reused, and handling governance and security policies. When there are many rules owned by different groups, and when reuse means that no one organization handles all the business rules in a decision, enterprise-scale management capabilities become essential. A product that allows federated storage of business rules in multiple repositories, that provides robust integration options with other repositories such as those for services and business processes, and that supports a variety of repository structures will be better able to scale. Similarly, support for approval workflows, integrated security and good user management capabilities will be important.

Best Practices in Decision Management Systems

There are four key principles of Decision Management Systems:

- Begin with the decision in mind
- Be transparent and agile
- Be predictive not reactive
- Test, learn and continually improve

Within each of these principles it is possible to identify a number of specific best practices in analysis and design, in development, in deployment and in operation.

Begin with the decision in mind

Decision Management Systems are built around a central and ongoing focus on automating decisions, particularly operational and micro decisions. Developing Decision Management Systems with a focus only on business processes, only on events or only on data is not effective. Understanding the business process or event context for a decision is helpful, but the development of Decision Management Systems requires a focus on decisions as a central component of enterprise architecture. Focusing on operational or transactional decisions (those that affect a single customer or single transaction) is a significant shift for most organizations and requires a conscious effort. In particular, where the operational decision in question is what is known as a micro decision, one that focuses on how to treat a single customer uniquely rather than as part of a large group, organizations must learn to focus on decision-making at a more granular level than previously.

It is also worth noting that this focus on decisions must come first, before a focus on business rules or predictive analytic models. When it comes to developing Decision Management Systems, the right business rules and most effective predictive analytic models can only be developed if there is a clear decision focus. While the most basic best practice is encapsulated in this principle, begin with the decision in mind, there are some more specific best practices that should be followed.

Decisions as peers for Process

One of the most important aspects of building Decision Management Systems is to ensure that decisions start being treated as peers to business processes.
Many organizations that are being successful with SOA and that are successfully adopting new and more advanced development technologies and approaches have done so using a business process focus. A focus on the end to end business process, not on organizational or system silos, and the tying of these business processes to real business outcomes represents a significant improvement in how information technology is applied to running an organization. To move forward with Decision Management Systems, however, it is necessary to do more than regard decisions as just part of a business process. Our work with clients as well as the evaluation of results from multiple companies shows that organizations that can manage decisions as peers to business processes do better. While it is true that decisions must be made to complete most business processes, simply encapsulating the decisions within the business process is not enough. Decisions are true peers for processes. Decisions are often re-used between processes and how a decision is made has a material difference on how the process executes. Failing to identify decisions explicitly can result in decision-making logic being left in business processes making them more complex and harder to change. Identifying high level decisions at the same time as you identify high level processes allows your understanding of both to evolve in parallel, keeping each focused and simpler. Link Decisions to business outcomes and results Your business can be thought of as a sequence of decisions over time. Organizations make strategic decisions, tactical decisions and operational decisions but each decision, each choice, affects the trajectory of the business. In fact, given that each choice you make about products, suppliers, customers, facilities, employees and more is a decision it is clear that decisions are the primary way in which you have a impact on the success or failure of your business. If there is no decision to make then there is no way for the organization to affect its destiny. One of the first steps, then, in understanding your decisions so that they can drive the development of effective Decision Management Systems, is linking them to business outcomes and results. For each decision you identify it is important to understand what key performance indicators, objectives, or business performance targets are impacted by the decision. Understanding that a particular decision has an impact on a particular measure and understanding the set of decisions that impact a measure has two important consequences. First it enables you to tell the difference between good decisions and bad decisions. A good decision will tend to move the indicators to which it is linked in a positive direction, a bad one will not. Second it enables you to see how you can correct when a measure gets outside acceptable bounds or moves in a poor direction. Understanding which decisions could be made differently gives you an immediate context for solving performance problems. Building links between decisions you identify and your performance management framework is important as you identify and design your decisions. It is also important to use this information to present options and alternatives to those who are tracking the objectives in a performance management context. 
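The decision-to-measure linkage described above can be captured in something as simple as a lookup structure. The sketch below, with invented decisions and KPIs, shows how such a mapping immediately answers the question of which decisions to revisit when a measure goes out of bounds.

```python
# Invented example: map each operational decision to the KPIs it impacts.
DECISION_KPIS = {
    "retention offer selection": ["churn_rate", "campaign_cost"],
    "credit limit assignment": ["default_rate", "revenue_per_account"],
    "claims fast-track routing": ["claims_cycle_time", "fraud_losses"],
}

def decisions_impacting(kpi):
    """Which decisions could be changed to move this KPI?"""
    return [d for d, kpis in DECISION_KPIS.items() if kpi in kpis]

# A KPI has drifted out of bounds; find the decisions that touch it.
print(decisions_impacting("churn_rate"))
# -> ['retention offer selection']
```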
Understand decision structure before beginning

Identifying decisions early, considering them as peers to processes and mapping them to your business performance management environment are all great ways to begin with the decision in mind. Before you start developing a Decision Management System, however, you should understand the structure of your decisions. The most effective way we have found to do this while working with clients is to decompose decisions to show their dependencies. Decisions are generally dependent on information, on know-how or analytic insight, and on other (typically more fine-grained) decisions. Having identified the immediate dependencies of a decision, you can then evaluate each of the decisions you identified and determine their dependencies in an iterative fashion. The dependency hierarchy you develop will actually become a network as decisions are reused, when multiple decisions have a dependency on a common sub-decision. This network reveals opportunities for reuse, shows what information is used where, and identifies all the potential sources of know-how for your decision making, whether regulations, policies, analytic insight or best practices. For more on this approach please see the author's book Decision Management Systems and Alan Fish's Knowledge Automation in the list of works cited.

Use a standards-based decision modeling technique

Building a Decision Requirements Model using the new Decision Model and Notation (DMN) standard captures decision requirements and improves business analysis and the overall requirements gathering and validating process. The Object Management Group (OMG) finalized DMN in Spring 2015. Decision modeling is a powerful technique for business analysis and for enterprise architecture. While important for all software development projects, decision requirements are especially important for Decision Management Systems projects adopting business rules and advanced analytic technologies, providing a repeatable, scalable approach to scoping and managing the decision-making where rules and analytics are most effectively applied.

Why model decisions

Experience shows that there are three main reasons for defining decision requirements as part of an overall requirements process:

- Current requirements approaches don't tackle the decision-making that is increasingly important in information systems.
- While important for all software development projects, decision requirements are especially important for projects adopting business rules and advanced analytic technologies.
- Decisions are a common language across business, IT and analytic organizations, improving collaboration, increasing reuse and easing implementation.

Gaps in current requirements approaches

Today organizations use a variety of techniques to accurately describe the requirements for an information system. Most systems involve some workflow, and this is increasingly described by business analysts in terms of business process models. Experience shows that when process modeling techniques are applied to describe decision-making, the resulting process models are over-complex. Decision-making modeled as business process is messy and hard to maintain. In addition, local exceptions and other decision-making details can quickly overwhelm process models. By identifying and modeling decisions separately from the process, these decision-making details no longer clutter up the process. This makes business processes simpler and makes it easier to make changes.
However, identifying decision-making as a task in a process (or as a step in a use case or as a requirement) can result either in long, detailed descriptions that are confusing and contradictory, or in short descriptions that lack the necessary detail. All this has to be sorted out during development, creating delays and additional costs. By modeling the decisions identified, a clear and concise definition of decision-making requirements can be developed. A separate yet linked model allows for clarity in context.

Special needs of business rules and advanced analytics projects

Successful business rules and analytic projects begin by focusing on the decision-making involved. For business rules projects, clarity about decision requirements scopes and directs business rules analysis. For advanced analytic projects a clear business objective is critical to success. Evidence is growing that specifying this objective in terms of the decision-making to be improved by the analytic is one of the most effective ways to do this. In both cases, then, it is essential to first define the decision-making required and only then focus on details like the specific business rules or predictive analytic models involved.

Specifying a decision model provides a repeatable, scalable approach to scoping and managing decision-making requirements for both business rules and analytic efforts. Today, many business rules analysis efforts can seem never-ending, with teams trying to capture all the rules in a business area. The result is often a big bucket of rules that are poorly coordinated and hard to manage. Instead, by understanding which decisions will be made, when, and to what purpose, it is now easy to tell when business rules analysis is complete. For analytics projects, established analytic approaches such as CRISP-DM stress the importance of understanding the project objectives and requirements from a business perspective, but to date there are no formal approaches to capturing this understanding in a repeatable, understandable format. Now business analysts have the tools and techniques of decision requirements modeling to identify and describe the decisions for which analytics will be required. How the data requirements support these decisions, and where these decisions fit, is clarified, and the use of analytics is focused more precisely. Read more in the Decision Discovery section of the Appendix.

Decisions as a shared framework and implementation mechanism

Decision modeling provides a framework that teams across an organization can use and that works for business analysts, business professionals, IT professionals and analytic teams. Decisions are more easily tied to performance measures and the business goals of a project. This makes it easier to focus project teams where they will have the highest impact and to measure results. Many business analysts have known all along that decisions, and decision-making, should be a first-class part of the requirements for a system. Systems that assume the user will do all the decision-making fail to deliver real-time responses (because humans struggle to respond in real time), fail to deliver self-service or support automated channels (because there is no human available in those scenarios), and fail front-line staff because, instead of empowering them with suitable actions to take, they require them to escalate to supervisors. What business analysts have lacked until now is a standard, established way to define these requirements.
Decision modeling is a powerful emerging technique for business analysis. Using the standard DMN notation to specify Decision Requirements Diagrams, and so specify a Decision Requirements Model, allows the accurate specification of decision requirements. Enterprise Architects, meanwhile, are chartered with fitting business rules and analytic technologies like data mining and predictive analytics into their enterprise architecture. A service-oriented platform and architecture, supported by integration and data management technology, does not have obvious holes for these technologies. Decisions are both the shared framework and the technical mechanism to easily implement these technologies.

Be transparent and agile

The way Decision Management Systems make each decision is both explicable to non-technical professionals and easy to change. Decision-making in most organizations is opaque: either embedded in legacy applications as code or existing only in the heads of employees. Decisions cannot be managed unless this decision-making approach is made transparent and easy to change, or agile. As noted in Managing Decision Logic above, this need for both design and execution transparency is the primary driver for the use of a Business Rules Management System to manage decision-making logic. Four main best practices are relevant in this area: design transparency, execution transparency, explicable analytics and business ownership of change.

Design Transparency for business and IT

The first best practice in transparency is that of ensuring design transparency for both business and IT practitioners. Most code that is written is completely opaque as far as non-technical business users are concerned. Much of it is even opaque as far as programmers other than the one that wrote it are concerned. This lack of transparency is unacceptable in Decision Management Systems. Design transparency means writing decision logic such that business practitioners, business analysts and IT professionals who were not involved in the original development can all read and understand it. This allows the design of the decision-making to be transparent, as everyone involved can see how the next decision is going to be made. This supports compliance, by allowing those verifying compliance to see how decisions will be made, and improves accuracy, by ensuring that everyone who knows how the decision should be made can understand how the system plans to make it. From a practical perspective this means writing all business rules so they can be read by business people (even those rules that will be edited by IT going forward) by avoiding technical constructs and terse, programmer-centric variable names. It means ensuring that a business-friendly vocabulary underpins the rules; the use of IT-centric names for objects and properties is one of the biggest reasons business people cannot understand business rules. It also means using graphical decision logic representations such as decision tables and decision trees whenever possible and following rule-writing best practices like avoiding ORs and writing large numbers of simple rules instead of a small number of large, complicated ones. Design transparency is the fundamental building block for all other kinds of transparency and for agility.
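As an illustration of design transparency, here is a sketch, using an invented discount decision, of decision logic written as a decision table of many simple rows over a business-friendly vocabulary rather than as nested code.

```python
# A sketch of design transparency: a decision table of simple rows,
# evaluated top to bottom, using an invented business vocabulary
# ("customer tier", "order total"). The logic can be read by anyone.

DISCOUNT_TABLE = [
    # (customer tier, minimum order total, discount)
    ("gold",     500.0, 0.15),
    ("gold",       0.0, 0.10),
    ("silver",   500.0, 0.08),
    ("silver",     0.0, 0.05),
    ("standard",   0.0, 0.00),
]

def decide_discount(customer_tier: str, order_total: float) -> float:
    """Return the discount from the first matching row, top to bottom."""
    for tier, minimum_total, discount in DISCOUNT_TABLE:
        if customer_tier == tier and order_total >= minimum_total:
            return discount
    return 0.0

print(decide_discount("gold", 650.0))    # 0.15
print(decide_discount("silver", 120.0))  # 0.05
```

Adding or changing a discount here means editing a row, not rewriting a conditional, which is the agility the best practice is after.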
Execution transparency and decision logic logging

It is essential to understand how the next decision will be made. Once decisions have been made, however, it will also be necessary to understand how they were made. The approach to the next decision will change constantly as business situations change or new regulations are enforced. The way the next decision will be made therefore diverges steadily from the way a decision was made in the past. Execution transparency means being able to go back and look at any specific decision to determine exactly how it was made. The decision logic and predictive analytic models used to make the decision must be recorded, or logged, so that the decision-making sequence is clear. Ideally this logging should be left on all the time, so that every decision is recorded, rather than being something that is only used for testing and debugging. When every decision can be analyzed, ongoing improvement becomes much easier. In an environment where any decision can be challenged, by regulators for example, such ongoing logging may be required. Most products support logging to a fairly technical format designed for high performance and minimal storage requirements. This will need to be expanded to be readable by non-technical users and integrated with other kinds of data (such as customer information or overall performance metrics) to deliver true execution transparency.

Explicable analytics

While the use of well-formed business rules to specify decision logic makes the biggest single contribution to transparency, explicable analytics have a role also. When decisions are made based on specific predictive analytic scores it will be important to be able to understand how each score was calculated and what the primary drivers of the score were. Just like decision logic, the way a score is calculated is likely to evolve over time, so it is important that the way a score was calculated at a particular point in time can be recreated. Some predictive analytic models are more explicable than others. The use of predictive analytic scorecards based on regression models, for instance, allows the contributions to a predictive score to be made very explicit and supports the definition of explanations, or reason codes, that can be returned with the score. Thus a customer may have a retention score of 0.62 with two reason codes, "Never renewed" and "Single product", that explain where that low score comes from. Decision trees, association rules and several other model types are also easily explicable. In contrast, models such as neural networks and other machine learning algorithms, as well as compound or ensemble methods involving multiple techniques, are often much less explicable. The value of explicable analytic techniques varies with the kind of decision involved: regulated consumer decisions put a premium on explicability while fraud detection, for instance, does not.
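A minimal sketch, assuming hypothetical field names, of how execution transparency and reason codes fit together: every decision is logged with the version of the logic used, the inputs it saw, the rules that fired and the reason codes returned, so any past decision can be reconstructed and explained.

```python
# A sketch of decision logging for execution transparency; field names
# are invented. One JSON record per decision, appended to a log file.
import json
from datetime import datetime, timezone

def log_decision(decision_name, logic_version, inputs, fired_rules,
                 outcome, reason_codes, log_file="decision_log.jsonl"):
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "decision": decision_name,
        "logic_version": logic_version,  # which rules/model version decided
        "inputs": inputs,                # the data the decision saw
        "fired_rules": fired_rules,      # the execution path taken
        "outcome": outcome,
        "reason_codes": reason_codes,    # business-readable explanations
    }
    with open(log_file, "a") as f:
        f.write(json.dumps(record) + "\n")

log_decision("Retention Offer", "v4.2",
             {"customer_id": "C-1001", "retention_score": 0.62},
             ["score-below-threshold", "single-product-holder"],
             "make-retention-offer",
             ["Never renewed", "Single product"])
```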
Business ownership of change

The final best practice is to focus ownership of change in the business. This means empowering the business to make the changes they need to the system when they need those changes made or when they see an opportunity in making a change. Business ownership of change is not essential for a successful Decision Management System. Many, if not most, such systems still use IT resources to make changes when necessary. Often these are less technical resources, business analysts rather than programmers, but it is still IT that makes and tests the changes. Over time most organizations will find that business ownership will improve the results they get from their Decision Management Systems. By empowering business owners to make their own changes (using capabilities like business user rule management and impact analysis) organizations will increase their agility and responsiveness, eliminating the impedance of the business-IT interface. Empowering the business to own its changes is not a trivial exercise, however, and cannot simply be asserted ("here you go, here's your new business rules interface, now please stop calling us"). An investment in suitable user interfaces and tools will be required, along with time and energy invested in change management.

Be predictive, not reactive

Decision Management Systems use the data an organization has collected or can access to improve the way decisions are being made by predicting the likely outcome of a decision and of doing nothing. Decisions are always about the future because they can only impact the future. All the data an organization has is about the past. When information is presented to human decision-makers it is often satisfactory to summarize and visualize it and to rely on a human's ability to extract meaning and spot patterns. Humans essentially make subconscious or conscious predictions from the historical data they are shown and then make their decisions in that context. When building Decision Management Systems, however, this approach will not work. Computer systems and Business Rules Management Systems are literal, doing exactly what they are told. They lack the kind of intuitive pattern recognition that humans have. To give a Decision Management System a view of the future to act as a context for its decision-making we must create an explicit prediction, a probability about the future. Technology for this is described in Embedding Predictive Analytics and In-database Analytic Infrastructure above. Three best practices relate to this focus on turning data into insight. The use of data mining and other analytic techniques to improve rules, and analytic-IT cooperation, are best practices in development approaches. A focus on real-time scoring will make for more powerful Decision Management Systems.

Using data mining with business rules

Many organizations building Decision Management Systems keep their rules-based development of decision logic and their use of analytics completely separate. At best they only bring the two disciplines together when they reference a predictive score in a business rule. This is a pity, and a clear best practice is to do more to drive collaboration in this area, specifically by engaging data miners and data mining approaches in the development of business rules. To get started with this best practice, the first step is to use analytical techniques to confirm and check business rules. Many business rules are based on judgment, best practices, rules of thumb and past experience. The experts involved in defining these rules can often say what the intent behind them is: that a rule is to help determine the best customers or to flag potentially delayed shipments, for example. Historical data can be used to see how likely these rules are to do what is intended, for instance the number of customers who meet the conditions in a best-customer rule or the correlation between the elements tested in the delayed-shipment rule and actual delays in shipments. Using data in this way both improves the quality of business rules and helps establish the power of data to improve decision-making.
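A sketch of exactly this kind of check, with invented column names: a judgmental rule flags shipments from new suppliers on rush orders as likely to be delayed, and historical data is used to compare the delay rate among flagged shipments to the base rate.

```python
# A sketch of checking a judgmental business rule against history;
# the dataset and column names are invented for illustration.
import pandas as pd

history = pd.DataFrame({
    "new_supplier": [True, True, False, False, True, False],
    "rush_order":   [True, False, True, False, True, False],
    "was_delayed":  [True, False, False, False, True, False],
})

# The rule: flag shipments that are both from a new supplier and rushed.
flagged = history[history["new_supplier"] & history["rush_order"]]

print("Base delay rate:   ", history["was_delayed"].mean())
print("Flagged delay rate:", flagged["was_delayed"].mean())
# If the flagged rate is no better than the base rate, the rule's intent
# ("flag potentially delayed shipments") is not being met.
```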
While reporting and simple analysis tools can help in this area, the use of data mining is particularly powerful for these kinds of checks. More sophisticated organizations can also use data mining to actually find candidate business rules. Many data mining techniques, such as decision trees and association rules, produce outputs that can easily be represented as business rules. Using these techniques to analyze data and come up with candidate rules for review by those managing the decision logic can be very effective. Because the output is a set of business rules it is visible and easy to review, breaking down the kind of reluctance that more opaque forms of analytics can provoke. At the end of the day the best practice is simple to define: organizations should regard their historical data as a source of business rules, just like their policies, best practices, expertise and regulations.

Analytic and IT cooperation

The power of predictive analytics is sometimes described as the power to turn vertical stacks of data (data over time) into horizontal information (additional properties or facts). Analytics professionals almost always look at data this way, seeking patterns in historical data that can be turned into probabilities or other characteristics, using analytics to simplify large amounts of data while amplifying its meaning. The challenge is that IT people do not think of data in the same way. IT departments tend to think of historical data as something to be summarized for reporting and as something to be moved off to backup storage to reduce costs or improve performance. They are very familiar with the design of a horizontal slice of the data (its structure) but not with how it ebbs and flows historically. They will often change data structures to improve operations without considering how this might affect historical comparisons, clean data to remove outliers and to include defaults, or overwrite values as time passes and data changes. Many of these kinds of standard IT tasks are very damaging from the perspective of an analytics team. A clear best practice, then, is to improve analytic and IT cooperation around data governance, data storage and management, data structure design and more. In this context the analytic team cannot just be the Business Intelligence, dashboard and reporting team but must include those doing data mining and predictive analytics. While the former are often part of the IT department and well integrated with the rest of the IT function, the latter are often spread out in business units or focused in a risk or marketing function. Building cooperation over time between analytic specialists and IT will reduce costs, improve the value and availability of data for more advanced analytics and make integrating analytics into Decision Management Systems easier.
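Returning to the best practice of mining candidate rules from data: here is a sketch using scikit-learn's decision tree on a made-up best-customer dataset. Each root-to-leaf path prints as a readable condition, exactly the kind of candidate rule the decision-logic owners can review and accept or reject.

```python
# A sketch of mining candidate business rules from historical data;
# the dataset is invented for illustration.
from sklearn.tree import DecisionTreeClassifier, export_text

# Features: [orders per year, average order value]; label: best customer?
X = [[12, 900], [2, 150], [8, 600], [1, 80], [15, 1200], [3, 200]]
y = [1, 0, 1, 0, 1, 0]

tree = DecisionTreeClassifier(max_depth=2).fit(X, y)
print(export_text(
    tree, feature_names=["orders_per_year", "average_order_value"]))
# Each path reads as a candidate rule, e.g. "if orders_per_year > 5.5
# then best customer", visible and easy to review.
```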
Real-time scoring not batch

A clear majority of organizations applying predictive analytic models today do so in batch. Having developed a predictive analytic model they run daily or weekly updates of their database, adding a score calculated from the model to a customer or other record in the database. When a Decision Management System needs access to the prediction it simply retrieves the column that is used to store the score. Integration is easy because the Decision Management System accesses the score like it does any other data item. The problem with this is that batch scores can get out of date when data is changing more rapidly than the batch is being run. For instance, a customer propensity-to-churn score that does not include the problem the customer had this morning, or the inquiry they made about cancellation penalties, is not going to be accurate. In addition, this arms-length integration may be technically simple but it also keeps the IT and analytic teams from needing to work together and is therefore potentially damaging in the long term. For long-term success with Decision Management Systems, and in particular to develop the kinds of Decision Management Systems that will allow an effective response to events and to new, more mobile channels, organizations need to develop systems that use real-time scoring. A real-time score is calculated exactly when it is needed, using all the available data at that moment. This might include recent emails, SoMoLo (Social Mobile Local) data, the opinion of a call center representative on the mood of the customer and much more. Ultimately, being able to decide in real time using up-to-the-second scores, or even to score data as it streams into a system so that predictions are available continuously, will be a source of competitive advantage.

Test, learn and continually improve

The decision-making in Decision Management Systems is dynamic and change is to be expected. The way a decision is made must be continually challenged and re-assessed so that the system can learn what works and adapt to work better. Supporting this kind of ongoing decision analysis requires both design choices in the construction of Decision Management Systems and integration with an organization's performance management environment. Both Business Rules Management Systems and Predictive Analytic Workbenches have functionality to make this easier, while Optimization Suites can be used to develop models to manage the potentially complex trade-offs that improving decision-making will require. This kind of continuous improvement relies on many of the features noted earlier, such as being able to link decisions to business outcomes and results, having execution transparency and decision logic logging, and supporting real-time scoring rather than batch. In addition, the development of integrated environments for ongoing decision improvement, broad use of experimentation and moving to automated tuning, adaptive analytics and optimization are all best practices worth considering.

Integrated decision improvement environment

To provide an integrated decision improvement environment, organizations should bring together the logs they have on how decisions have been made in the past, information about the business results they achieved using those decisions and the decision logic and analytic management environment itself. Each piece of this environment typically involves a different piece of technology, with everything from a business rules management system to an analytic model management tool to traditional dashboard and business intelligence capabilities being used. Providing an integrated, coherent environment where all this is brought together around a particular decision offers real benefits to an organization. When business results can be compared to the decision-making that caused them, and when the business owner can navigate directly from this analysis to editors allowing them to change future decision-making behavior, organizations will see more rapid and more accurate responses to changing conditions.
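As a sketch of the real-time scoring best practice above, assuming hypothetical stand-in functions for data access: the churn score is assembled from whatever data is current at the moment of the decision, rather than read from a column refreshed by a nightly batch that would miss this morning's complaint.

```python
# A sketch of real-time scoring; the data-access functions are stubs
# standing in for real services, and the scoring logic is invented.
def fetch_profile(customer_id: str) -> dict:
    # Stub standing in for a customer data service.
    return {"months_to_renewal": 1}

def fetch_recent_events(customer_id: str) -> list:
    # Stub standing in for a live stream of recent interactions.
    return [{"type": "complaint"}, {"type": "cancellation_inquiry"}]

def churn_score_realtime(customer_id: str) -> float:
    profile = fetch_profile(customer_id)
    recent = fetch_recent_events(customer_id)
    score = 0.1
    if profile["months_to_renewal"] < 2:
        score += 0.2
    if any(e["type"] == "complaint" for e in recent):
        score += 0.3   # a score computed last night would miss this
    if any(e["type"] == "cancellation_inquiry" for e in recent):
        score += 0.3
    return min(score, 1.0)

print(churn_score_realtime("C-1001"))  # 0.9
```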
Broad use of experimentation

Relatively few organizations are comfortable with experimentation. For most, experiments are confined to the marketing department or to low-volume experiments where customers and prospects are quizzed on preferences or likely responses. Some organizations use experimentation to determine price sensitivity and a growing number of web teams use experimentation for website design. Yet without experimentation it is very hard to see if what you are doing is the best possible approach or to truly see if a new approach would work better. Unless the behavior of real customers or prospects (or suppliers or partners) is evaluated for multiple options, those options cannot really be compared. Asking people what they would do if they got a different option rarely results in data that matches what they actually do when they get that different option. Organizations that wish to succeed in the long term with analytics and with Decision Management Systems will invest in the organizational fortitude and expertise required to conduct continuous and numerous experiments.

Moving to automated tuning and adaptive analytics

The logical extension of a focus on real-time is to focus on automated tuning and adaptive analytics. Today most Decision Management Systems, and the analytics within them, are adapted manually, with experts considering the effectiveness of the decision and then making changes to improve it. As systems become more real-time, however, this becomes increasingly impractical and suboptimal. Especially in very high-volume, quick-response situations such as ad serving, the system is continually gathering data that shows what works and what does not. Waiting until a person considers this data before changing the behavior of the system means allowing the system to make poor responses long after the data exists to show this is going on. The best practice is to consider the use of machine learning and adaptive analytic engines in these circumstances. Building trust in the organization that analytics work will increasingly allow analytic systems to be left to make more of the decisions themselves. Allowing analytic engines to collect performance data and respond to it, perhaps within defined limits, will improve the performance of real-time decision making while reducing the length of time it takes to respond to a change. Not all decisions are suitable for these kinds of engines. For instance, decisions that have a strong regulatory framework, or where the time to get a response to a decision is long, will not work well. Where a decision is suitable, however, a clear best practice is to integrate these kinds of more adaptive engines into Decision Management Systems.

Optimization

One final best practice in this area is to increase the use of optimization over time. A powerful approach, optimization is often siloed into specific parts of the business and regarded as a bit of a sidebar to core analytic efforts. In part this is because the mathematics can be very complex and because the solutions can take a long time to develop. A lack of business-user-friendly interfaces for reviewing results and a need to integrate optimization with simulation tools also limit the use of optimization in many organizations. This is beginning to change, however, as more business-friendly interfaces are developed and as optimization tools become more integrated into the overall stack for developing Decision Management Systems. Faster and more stable optimization routines, standard templates and integration with both predictive analytics and business rules are also helping. Organizations should regard the use of optimization as part of their decision design and improvement processes, and should seek therefore to bring it out of its silos and into the mainstream.
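A minimal sketch of optimization applied to a decision, using SciPy's linear programming solver on an invented credit-allocation problem: maximize the expected return from a fixed credit budget across two customer segments, subject to a policy cap on the riskier segment. All numbers are assumptions for illustration.

```python
# A sketch of a constrained resource-allocation decision as a linear
# program; returns, budget and caps are invented. linprog minimizes,
# so expected returns are negated.
from scipy.optimize import linprog

# Expected return per dollar of credit: segment A 4%, segment B 7%.
expected_return = [-0.04, -0.07]

result = linprog(
    c=expected_return,
    A_ub=[[1, 1],            # total budget constraint
          [0, 1]],           # policy cap on the riskier segment B
    b_ub=[1_000_000, 300_000],
    bounds=[(0, None), (0, None)],
)
print(result.x)     # about [700000, 300000]: fill B to its cap, rest to A
print(-result.fun)  # expected return: 0.04*700k + 0.07*300k = 49000
```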
There are many compelling use cases for Decision Management Systems. Any time an organization must make a decision over and over again, and where the accuracy or consistency of that decision, its compliance with regulation or its timeliness is important, Decision Management Systems can play an important role. Organizations can often spot such decisions by including a decision requirements step in their enterprise architecture framework and then looking for decision words such as determine, validate, calculate, assess, choose, select and, of course, decide. For instance:
- Determine if a customer is eligible for a benefit
- Validate the completeness of an invoice
- Calculate the discount for an order
- Assess which supplier is lowest risk
- Select the terms for a loan
- Choose which claims to Fast Track
As noted in Suitable Operational Decisions, these decisions have certain characteristics that make managing decision logic, optimizing trade-offs and embedding predictive analytics valuable. It is useful to categorize decisions into various types, though some decisions include characteristics of several types. For instance:

Eligibility or Approval: Is this customer, prospect or citizen eligible for this product or service? These decisions are made over and over again and should be made consistently every time. The use of a business rules-based system to determine eligibility, or to ensure that a transaction is being handled in a compliant way, is increasingly common. These decisions are policy- and regulation-heavy and the use of a Business Rules Management System to handle all the business rules is very effective. While eligibility and compliance decisions can seem fairly static, changes are often outside the control of an organization and can be imposed at short notice.

Validation: Is this claim or invoice valid for processing? Validation decisions are almost always operational, they are overwhelmingly rules-based, and the rules are generally fixed and repeatable. Validation is often associated with forms, and online versions of these forms are of little use without validation. The move to mobile apps makes validation even more important.

Calculation: What is the correct price or rate for this product or service? Calculations are usually operational and they are overwhelmingly rules-based. The rules are generally fixed and repeatable, but making them visible and manageable using business rules pays off when changes are required or when explanations must be given. Sadly, calculations are often embedded in code.

Risk: How risky is this supplier's promised delivery date, and what discount should we insist on? Making a decision that involves a risk assessment, whether delivery risk or credit risk, requires balancing policies, regulation and some formal risk analysis. The use of business analytics to make risk assessments has largely replaced gut checks, and predictive analytic models allow such risk assessments to be embedded in systems.

Fraud: How likely is this claim to be fraudulent and how should we process it? Fraud detection generally involves a running battle with fraudsters, putting a premium on rapid response and an ability to keep up with new kinds of fraud.
Managing the expertise and best practices required to detect fraud using business rules gives this agility, while predictive analytics can help with the kind of outlier detection and pattern matching that increases the effectiveness of these systems.

Opportunity: What represents the best opportunity to maximize revenue? Especially when dealing with customers, organizations want to make sure they are making the most of every interaction. To do so they must make a whole series of opportunity decisions, such as what to cross-sell or when to upsell. These decisions involve identifying the best opportunity, the one with the greatest propensity to be accepted, as well as when to promote it and where. A combination of expertise, best practices and propensity analysis is required.

Maximizing: How can I use these resources for maximum impact? Many business decisions are made with a view to maximizing the value of constrained resources. Whether it is deciding how best to allocate credit to a card portfolio or how best to use a set of machines in a production line, deciding how to maximize the value of resources involves constraints, rules and optimization.

Assignment: Who should see this transaction next? Lots of business processes involve routing or assignment. In addition, when a complex decision is automated it is common for some percentage to be left for manual review or audit. The rules that determine who best to route these transactions to, and how to handle delays or queuing problems, can be numerous and complex, ideal for managing in a Decision Management System.

Targeting: What exactly should we say to this person? In many situations there is an opportunity to personalize or target someone very specifically. By combining everything known about someone with analytics predicting likely trends in their behavior and best practices, and constraining this to be compliant with privacy and other regulations, individuals can feel like the system is interacting only with them.

The rest of this chapter will focus on specific use cases that have been handled using Decision Management Systems. Every one of these examples has been automated and represents a client either of Decision Management Solutions or of one of the vendors in the report. If you want to know more about any of them, email us and we will connect you with additional information. The use cases in this section are divided up into a number of categories. Some of these are verticals, such as the section on government operations, while others are focused on categories relevant to multiple industries, such as fraud detection or personalization. Within each section a number of real examples are explained, but this is not an exhaustive list of possible use cases.

Many organizations suffer losses from fraud and abuse. These range from fraudulent claims for services that were never performed, to applications for credit for people that don't exist, to orders that include bribes and illegal payments. In every case an organization must decide whether to accept the transaction as valid, reject it or investigate it for fraud. These decisions are high volume, as they must be made for each transaction, and are ideal for automation using a Decision Management System. Fraud detection systems typically involve business rules for compliance with policies and regulations as well as predictive analytics to match the current transaction to patterns known to be fraudulent or to identify that the current transaction looks very different from legitimate ones.
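A sketch of that rules-plus-analytics pattern, using scikit-learn's IsolationForest as a stand-in outlier detector and invented thresholds: policy rules enforce what must always happen, while the model flags transactions that look unlike the legitimate ones it was trained on.

```python
# A sketch of fraud decisioning combining policy rules with an outlier
# detector; the data and thresholds are invented for illustration.
from sklearn.ensemble import IsolationForest

# Historical legitimate transactions: [amount, hour of day]
legitimate = [[40, 10], [25, 14], [60, 11], [35, 16], [50, 9], [45, 13]]
detector = IsolationForest(random_state=0).fit(legitimate)

def decide_transaction(amount: float, hour: int) -> str:
    if amount > 10_000:                 # policy rule: always investigate
        return "refer-to-investigator"
    if detector.predict([[amount, hour]])[0] == -1:   # model: outlier
        return "hold-for-review"
    return "accept"

print(decide_transaction(45, 12))   # expected: accept
print(decide_transaction(900, 3))   # expected: hold-for-review
```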
A wide variety of fraud detection and handling Decision Management Systems are built, and fraud detection is one of the primary use cases for Decision Management. Specific examples of use cases are listed below; it should be noted that all these decisions are increasingly combined into an integrated fraud management system.

Transaction is fraudulent: The basic fraud detection use case. Organizations will withhold payment, withhold partial credit or decline a payment to prevent fraud. Suitable transactions include warranty claims, insurance claims, credit card payments, auction payments, tax returns and many more. Besides the basics of declining or only partially paying, some Decision Management Systems will identify transactions that require follow-up, such as a call from your credit card issuer, even though the transaction was accepted.

Application fraud: A variant on transaction fraud is application fraud. For instance, when a consumer or organization is applying for a service, especially one provided on credit or involving other risks to the provider, a Decision Management System can be used to determine if the application is fraudulent and how to handle it in terms of review or rejection.

Identity fraud: When someone applies for a service or product, or makes a transaction, it is important that they are who they say they are. At other times, too, the use of a Decision Management System to identify potential identity fraud is highly valuable. Such systems can be part of preventing application or transaction fraud but can also be used independently, such as for security or access control.

Supplier or provider is fraudulent: Even when a transaction appears valid it is possible that it is associated with a provider of a service or good that has a pattern of behavior that suggests fraud. Decision Management Systems are used to identify those suppliers or providers of service, in healthcare for instance, that have a pattern of such behavior so that even apparently valid transactions can be reviewed before being paid.

Fraud network: The newest Decision Management Systems in fraud are focused on fraud networks. These decide if the combination of customers, suppliers, inspectors and auditors, or the combination of doctor, patient, pharmacist and claimant, together represents a fraud risk. Each of the individuals may seem fine, and the transaction likewise, but the network is fraudulent.

Another well-established area for Decision Management Systems is that of underwriting and origination. Whether originating loans, mortgages or credit, or underwriting insurance, these decisions offer a strong use case for Decision Management Systems. Often regulated and constrained by policy, these decisions can be effectively managed using business rules. An assessment of risk is often critical to deciding what price or terms to offer: higher risk customers must provide more documentation or pay a higher interest rate. The use of predictive analytic models to predict risk in these circumstances is also well established. Combining these business rules and predictive analytic models into a Decision Management System is a very effective tool for automating the underwriting decision. A series of decisions is typically involved in originating or underwriting, and Decision Management Systems have been built for many of these. An initial calculation of the likely price drives a quoting decision. Some Decision Management Systems provide only an estimated quote while others use more complex decisioning, including a risk assessment, to produce a bindable or committed quote that the company is willing to stand behind. Estimated quotes are often easier to generate with less data, making them appealing to users in a hurry, while bindable quotes typically involve more data input and time but are more solid.
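A sketch of the kind of decisioning behind a bindable quote, with invented policy thresholds and a stub risk model: policy rules screen the applicant, and the predicted probability of default drives the rate and documentation requirements quoted.

```python
# A sketch of risk-based quote/underwriting decisioning; the thresholds
# and risk model are invented stand-ins for real policy and analytics.
def predicted_default_probability(applicant: dict) -> float:
    # Stub standing in for a real predictive analytic risk model.
    return 0.04 if applicant["credit_score"] > 700 else 0.12

def bindable_quote(applicant: dict) -> dict:
    if applicant["credit_score"] < 550:      # policy rule: decline outright
        return {"decision": "decline",
                "reason": "Credit score below policy minimum"}
    risk = predicted_default_probability(applicant)
    rate = 0.05 + risk * 0.5                 # price reflects predicted risk
    documents = ["proof of income"] if risk > 0.10 else []
    return {"decision": "quote", "rate": round(rate, 4),
            "extra_documents": documents}

print(bindable_quote({"credit_score": 640}))
# {'decision': 'quote', 'rate': 0.11, 'extra_documents': ['proof of income']}
```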
Underwriting: Underwriting or originating the loan or insurance product typically involves applying both rules (from regulations and policies) and making some kind of risk assessment (credit risk, insurance claim risk, etc.) by predicting the likelihood of one or more bad outcomes using predictive analytic models. Such systems often replace manual decision-making, improving consistency, removing bias and freeing up underwriters or loan officers to focus on complex cases and the overall process. Some forms of origination and underwriting are sufficiently complex (commercial loans and insurance, for instance) that the role of Decision Management Systems is largely in helping a human user, either by making some of the component decisions within the overall decision or by at least eliminating options or choices that are not allowed in the circumstances.

Pricing: Pricing a loan or policy is sometimes a separate calculation decision managed by a Decision Management System, whether or not the decision to underwrite is automated. These are typically based purely on calculation rules.

Payroll deduction calculation: When applicants are approved for insurance or loans there may be additional calculations that can be automated. One example is a Decision Management System to calculate appropriate payroll deductions and the tax implications of same.

How to approve this request: Some organizations are not interested in, willing, or able to automate the decisions themselves. Even in these circumstances Decision Management Systems can play a useful role. Organizations have built Decision Management Systems to manage the approval process (applying regulations and restrictions on how approval is managed and who is involved), to identify the forms and proofs necessary prior to approval and more. Even when the business decision is left to a human user, Decision Management Systems can improve throughput and efficiency. By handling decisions such as readiness (do we have all the paperwork we need?), assignment and routing, they can make the manual decision-making flow more quickly and efficiently.

The use of Decision Management Systems to focus marketing efforts more effectively is becoming increasingly common as the cost of building and operating Decision Management Systems drops. Where in the past the value of each individual decision had to be quite high to justify a Decision Management System (thus fraud and risk-based decisions dominated), modern platform technologies and pre-configured Decision Management Systems can be used even when the value of each decision is very low. For instance, the difference between a good cross-sell decision and a bad one may not be very great, while the difference between a good loan origination decision and a bad one may result in thousands of dollars of losses.
Organizations focused on becoming customer-centric are increasingly turning to an approach known as next best action or next best activity (some more grammatically precise organizations talk about best next action). Such an approach involves considering every action that the organization could take toward a customer (making a cross-sell offer, collecting new information about customer preferences, reminding them to use a product they already own) and ensuring that each opportunity for interaction uses the best one for long-term customer value. This focus on actions, not just offers, and a desire to centralize and systematically improve the selection of the best action, drives a need for a Decision Management System focused on this decision. While these are not limited to marketing actions (they include service and support issues) they are typically rooted in Marketing. See also personalization below.

Targeted Marketing: Organizations are trying to ensure that their marketing is more relevant and targeted. They are dividing customers and prospects into increasingly small segments using analytics and then focusing messaging on these segments. Combining business rules and predictive analytics to effectively target every prospect, this approach to targeted marketing relies on a Decision Management System at its core. The need to replace blanket campaigns that send the same offer to everyone with something more focused drives the need for a Decision Management System.

Next Best Offer: The classic marketing Decision Management System is one that calculates the next best offer for a customer. Such systems apply best practice and contact rules as well as predictive analytic models for propensity to buy to determine which of a company's products is most appropriate as the next product for a customer. This then drives promotional activity.

Cross-sell: Related to the next best offer approach is the use of Decision Management Systems to drive cross-sell. Companies are developing these systems to suggest appropriate cross-sell offers in call centers as well as driving them into the check-out process online. Some are even using them in store locations. Improved cross-sell drives higher basket value and can improve loyalty by creating customers with more (product) connections to a company. These decisions are increasingly managed across product lines or lines of business, further increasing the value proposition of a centrally managed system.

Upsell: In an almost identical fashion, companies are using Decision Management Systems to identify a product from within the line of business that is more profitable or advantageous than the one the customer is currently planning to buy. These systems tend to stay within a line of business and are evolving from being rules-based to including analytics to predict what is likely to be accepted, so that upsells are not made when they will simply irritate a customer.

Customer Next-Best-Action: As noted, some organizations are evolving their marketing and support systems to a next best action approach. These Decision Management Systems coordinate all possible actions (sell this additional product, encourage use of this service the customer already has, recommend this product fix or FAQ answer, ask the customer for this clarification on their data) and select the one that is most likely to move the customer conversation along and build long-term value. These systems involve business rules about who to contact and when, as well as definitions of product or action eligibility, while predictions of propensity to accept and of likely future profitability are at the heart of effective choices. The marketing department typically drives much of this but customer service and support must be involved also if the system is to truly focus on next best action.
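A sketch of this next-best-offer pattern, with invented offers and a stub propensity model: eligibility and contact rules filter the candidate actions, and the highest expected-value eligible action wins.

```python
# A sketch of a next-best-offer decision; offers, rules and propensity
# scores are invented for illustration.
OFFERS = [
    {"name": "upgrade-plan",  "value": 120.0, "needs_product": "basic-plan"},
    {"name": "add-insurance", "value": 80.0,  "needs_product": None},
    {"name": "new-card",      "value": 200.0, "needs_product": None},
]

def propensity(customer: dict, offer: dict) -> float:
    # Stub standing in for a predictive propensity-to-accept model.
    return {"upgrade-plan": 0.30, "add-insurance": 0.10,
            "new-card": 0.05}[offer["name"]]

def next_best_offer(customer: dict) -> str:
    eligible = [
        o for o in OFFERS
        # eligibility rule: the offer's prerequisite product, if any
        if (o["needs_product"] is None
            or o["needs_product"] in customer["products"])
        # contact rule: don't repeat a recent offer
        and o["name"] not in customer["recent_offers"]
    ]
    best = max(eligible, key=lambda o: propensity(customer, o) * o["value"])
    return best["name"]

print(next_best_offer({"products": ["basic-plan"],
                       "recent_offers": ["new-card"]}))
# upgrade-plan: 0.30 * 120 beats 0.10 * 80
```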
Determine coupon: Some businesses rely on coupons and on getting coupons (whether paper or electronic) into the hands of customers who will use them in a way that boosts the company's bottom line. Decision Management Systems are used to determine which customers are eligible for which coupons and, increasingly, to focus coupon spend analytically where it is likely to have the most impact.

Personalize offer: Marketing in some organizations is moving beyond segments and standard offers (who gets which offer) to a focus on personalization. These organizations are personalizing their interactions with customers and prospects using everything they know about a prospective customer. Moving beyond just using names and locations, these Decision Management Systems are making a micro decision about each prospect, generating messages and contact strategies specific to that prospect. Not "which customers get this offer" but "what should we say to this customer right now".

Change or prevent behavior: Some Decision Management Systems send communications designed to change behavior on the part of prospects or customers. These are not necessarily focused on offers or products but send specific information designed to provoke a short-term or long-term change in behavior. For instance, Decision Management Systems have been built to target someone to increase the likelihood they will make a bequest, to increase their loyalty, to reduce the likelihood they will churn and more. These systems use predictive analytics, to identify those most likely to make a bequest for instance, and then use the factors that drive this model to see what content or communication might influence others to do likewise. These systems can also be very real-time and responsive, responding rapidly to competitors by identifying those customers who will be impacted by a change and targeting them with content most likely to counteract that competitor's behavior and so sustain loyalty.

Personalization may seem like the realm of Marketing, but in fact Decision Management Systems have been used to drive a wide range of personalization beyond that used in marketing offer management. These systems take what is known about a user (information about them, past history, preferences and, increasingly, predictions about their likely interests and future behavior) and use this to personalize some interaction with them. These Decision Management Systems replace "one size fits no one" interactions with intensely targeted interactions, allowing users to feel that they are known and helping them navigate increasingly large pools of information.

Determine relevance of content: As organizations try to help users navigate huge volumes of content, Decision Management Systems are increasingly being used to decide what content is relevant to a particular user. Such systems are often necessary as traditional agents or intermediaries are no longer available. In travel, for instance, travel agents used to act as a filter on content for travelers. Now, with more people booking online, a Decision Management System is needed to provide that same filtering, as otherwise there is simply an overwhelming amount of information to review.

Customize advice: While there is a lot of generic content available today, consumers increasingly look for advice customized to them.
By applying expert rules and analytics, Decision Management Systems can be used to customize the advice being given. For instance, when giving people weight loss or pain management advice, the results of a questionnaire can drive sophisticated rules and analytics based on medical best practices and research to produce advice that is tailored, specific and relevant.

Configure offer, product or service: In similar fashion, some organizations are using Decision Management Systems to configure offers, products or services. Whether computers that are assembled from a wide variety of parts, trucks that can be ordered to meet personal needs or vacation packages, it is often a non-trivial exercise to determine that a particular configuration is allowed or buildable. Decision Management Systems are used to suggest configurations, to match configurations to stated goals and to confirm a custom configuration.

Optimal price for this customer: As dynamic pricing has become more common, determining the optimal price for a specific customer has likewise become more common. Using a Decision Management System to correctly price a product or service for a specific customer, based not only on their needs and configuration but also on the value they will place on the product and potentially their ability to pay, allows companies to maximize the value of their sales. As more data and more sophisticated systems become available, this is increasingly focused on individual customers, not just customer segments, for truly personalized pricing.

Collections, chasing down those who owe money to the organization and collecting it, is a complex problem. Traditionally handled with large teams of people "dialing for dollars" and a first-in, first-out or highest-dollar-value approach to prioritization, collections can be made dramatically more effective using Decision Management Systems. See also personalization above.

Next best action: Some forward-looking organizations are using Decision Management Systems to assign collections agents to work dynamically. Instead of having each agent work through their own queue, these systems dynamically prioritize the available collections work and assign it to agents as they become available. Using everything known about the overdue payment, predictions of the likelihood that someone will pay and even the skills of the collection agent, these systems determine the next best collection action. There is a general move toward next best action systems across the board. Whether it is actions for customers, actions for collections agents, audits or quality reviews, focusing limited resources on the next best action adds value when it replaces traditional first-in, first-out systems.

How to handle non-payment: Even when using standard queuing and assignment systems, collections organizations can benefit from Decision Management Systems. In particular, the use of business rules and predictive analytics to determine the most appropriate way to handle non-payment situations is effective at reducing unnecessary calls and increasing collection rates. By identifying those most likely to simply have forgotten and prioritizing a simple reminder, by predicting the amount someone can pay and the likelihood they will stick to a commitment, and by ensuring consistent application of collections policy, Decision Management Systems can dramatically improve the way non-payment situations are handled.
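A sketch of the dynamic prioritization described above, with invented fields: instead of first-in, first-out, open cases are ranked by expected recovery (predicted probability of payment times the amount owed) and the top case goes to the next free agent.

```python
# A sketch of collections prioritization by expected recovery; the
# cases and probabilities are invented for illustration.
cases = [
    {"id": "A-17", "amount": 1200.0, "pay_probability": 0.20},
    {"id": "A-18", "amount": 300.0,  "pay_probability": 0.90},
    {"id": "A-19", "amount": 5000.0, "pay_probability": 0.03},
]

def next_best_collection_case(open_cases):
    # Expected recovery = probability of payment * amount owed.
    return max(open_cases,
               key=lambda c: c["pay_probability"] * c["amount"])

print(next_best_collection_case(cases)["id"])
# A-18: a likely small payment (270) beats an unlikely large one (240, 150)
```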
While many of the scenarios identified as candidates for Decision Management Systems are commercial, government operations can also use them to improve the effectiveness and efficiency of public sector organizations. More heavily focused on the use of business rules to enforce regulations and associated policy, public sector Decision Management Systems can improve consistency, provide enhanced self-service for citizens and demonstrate compliance. The growing use of predictive analytics in these systems can also help target constrained government resources where they will do the most good.

Benefit eligibility: Perhaps the most common Decision Management System in the public sector, the use of a rules-based system to determine who is, and who is not, eligible for a benefit or service has clear benefits. Not only is the system consistent, always applying the same rules, it is available 24×7, improving access for all. Such a system can also be readily changed when regulations change or even when court cases demand exceptions or updates.

Benefit calculations: Related to eligibility is the calculation of benefits. While some benefits are straightforward to calculate, others can be very complex. When multiple factors must be considered, complex questionnaires processed and tax returns consulted to determine the correct value of a benefit, a Decision Management System can dramatically improve both response time and accuracy.

Tax or fee calculations: Government agencies must often make complex calculations of taxes or fees owed (such as vehicle license fees or business registration fees). These calculations can get complex and, perhaps even more importantly, are much more prone to change than the systems and processes of which they are part. By separating out the calculation as a Decision Management System, an agency can create a stable process, for registering cars or handling tax returns, while retaining the ability to make rapid and effective changes to the calculation.

Permits or other paperwork needed: One of the most frustrating processes for citizens is often determining which permits or paperwork are needed for a particular activity (to modify a house, for instance, or to apply for a grant). Using a Decision Management System to help citizens navigate these kinds of decisions reduces their frustration and allows limited resources to be applied to solving problems, not discussing paperwork. Putting the decision first (deciding what paperwork is required and then processing it) can also dramatically simplify the processes involved.

Submission completeness and approval: As government agencies have developed online interfaces for forms, allowing citizens to submit paperwork electronically, they have created the opportunity for new uses of Decision Management Systems. If a form can be submitted electronically then a Decision Management System can be used to check that it is complete. It can also do so intelligently, using data entered in one part of the form to make sure that other parts are filled out correctly, moving far beyond simple mandatory fields or defined ranges for values. Predictive analytics can be added to flag potential fraud where appropriate, allowing the automatic processing of complete, low-risk applications and manual review of others.
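Returning to benefit eligibility: a minimal sketch, with invented policy thresholds, of an eligibility decision written as explicit rules that return reasons as well as a verdict, so the decision is consistent, explicable and easy to change when regulations change.

```python
# A sketch of a rules-based benefit eligibility decision; the thresholds
# are invented stand-ins for real regulations.
def decide_benefit_eligibility(applicant: dict):
    reasons = []
    if applicant["age"] < 18:
        reasons.append("Applicant must be at least 18")
    if applicant["income"] > 45_000:
        reasons.append("Income exceeds the benefit threshold")
    if not applicant["resident"]:
        reasons.append("Applicant must be a resident")
    # Eligible only if no rule produced a reason to decline.
    return (len(reasons) == 0, reasons)

print(decide_benefit_eligibility(
    {"age": 34, "income": 52_000, "resident": True}))
# (False, ['Income exceeds the benefit threshold'])
```

When a regulation changes, the corresponding rule changes; the reasons returned also give citizens an explanation and give auditors the transparency the section describes.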
Audit selection: Many government departments must decide who to audit. These audits often uncover unpaid taxes, fraud or abuse. Yet the groups that conduct these audits are constrained by budgets and headcount limits in ways that mean that not all potentially useful audits can be conducted. A Decision Management System can use expert rules as well as predictive analytics to prioritize the most potentially valuable audits and do so with transparency: the rules being applied will be clear, so there is no chance of bias or favoritism. Some organizations have even gone to a next best audit approach, dynamically assigning auditors to investigations as they become available.

Targeting resources: Another use of Decision Management Systems to maximize the value of resources comes in targeting scarce resources where they will do the most good. Police forces can assign patrol cars or beat officers to neighborhoods, educational authorities can assign advisors to at-risk students, and social services can assign case workers using Decision Management Systems. These can apply not just policy and best practices but also predictions of risk (of crime, of dropping out of high school, etc.) and of the most effective intervention to maximize the value of resources in terms of overall results.

Identify fraud, waste and abuse: Finally, there are many ways to use Decision Management Systems to identify fraud, waste and abuse. This includes identifying fraudulent tax returns, providers who are inefficient users of government grants and even people making unnecessary emergency calls. By flagging these transactions and individuals, Decision Management Systems focus government budgets where they will help most, reducing the cost of a given level of service. See also fraud detection above.

Decision Management Systems are just beginning to penetrate supply chain management. There is tremendous potential for Decision Management Systems in this area, particularly as organizations look for ways to bring predictive analytics to bear on their supply chain. By focusing predictive analytics on specific orders or shipments, Decision Management Systems make it possible to effectively apply more advanced analytics even in very complex supply chains. Many examples exist despite a low overall penetration rate.

Eligible supplier: One of the most basic use cases for Decision Management Systems in the supply chain is that of determining eligible suppliers. For organizations with large numbers of suppliers, especially those where commodity products are sourced from many competing suppliers, the automated determination of eligible suppliers can be a big time and cost saver. Allowing organizations to determine for themselves if they could become a supplier, and allowing each country or product line to add its own additional criteria for eligibility, are additional reasons for using a rules-based approach to determine eligibility in a flexible way.

Best supplier selection: While supplier eligibility can be made more efficient using a Decision Management System, it is also possible to become more effective in the use of suppliers by automating the selection of the most appropriate one for a given order. Using both eligibility rules and predictive analytics, showing the likelihood of on-time and to-specification delivery for example, can create a system that automatically selects suppliers based on the right balance of cost, speed and quality given the circumstances of the order. Such systems improve straight-through processing, reducing the need for human involvement in increasingly complex supply chains.
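A sketch of such a supplier selection decision, with invented weights: an eligibility rule filters the supplier list, then a weighted score balancing cost against a predicted on-time probability (a stand-in for a real predictive model) picks the supplier for this order.

```python
# A sketch of best-supplier selection; suppliers, probabilities and
# weights are invented for illustration.
suppliers = [
    {"name": "S1", "certified": True,  "unit_cost": 9.5, "on_time_prob": 0.97},
    {"name": "S2", "certified": True,  "unit_cost": 8.0, "on_time_prob": 0.80},
    {"name": "S3", "certified": False, "unit_cost": 7.0, "on_time_prob": 0.99},
]

def select_supplier(candidates, rush_order: bool):
    eligible = [s for s in candidates if s["certified"]]  # eligibility rule
    # Business rule: on a rush order, reliability outweighs cost.
    weight = 0.8 if rush_order else 0.3
    def score(s):
        return weight * s["on_time_prob"] - (1 - weight) * s["unit_cost"] / 10
    return max(eligible, key=score)

print(select_supplier(suppliers, rush_order=True)["name"])   # S1 (reliable)
print(select_supplier(suppliers, rush_order=False)["name"])  # S2 (cheaper)
```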
Routing and shipping selection: As supply chains become more complex and distributed, it is also increasingly hard to know how to route a delivery or what shipping mechanism to use. When those shipping the order are not those paying for it, as is often the case when many small manufacturers are tied into a global supply chain, real-time determination of the right thing to do is essential. A Decision Management System can apply best practices, policy, short-term deals offered by shippers, current traffic problems and more to come up with the best shipping option and the best routing, reducing costs throughout the supply chain.

Reorder levels and alerting: While many supply chain systems have automated thresholds for re-ordering or for alerting a user that stock is low, these are often simplistic. The reality of a modern supply chain is that the amount of stock, and where that stock should be, is highly variable. It can depend on the season, on trends in sales, on competitive behavior, on marketing campaigns and much more. Using a Decision Management System allows multiple sources of rules to be applied to the decision, allows the integration of predictions and forecasts, and supports short-term adjustments and tweaks as necessary.

Many organizations must use assets, fixed plant for instance, as effectively as possible if they are to operate profitably. The use of a Decision Management System to improve decisions about such assets is still relatively unusual but there is a growing set of examples. Particularly as more equipment is instrumented and connected to a network, the value of a Decision Management System for making targeted decisions specific to each asset is rising.

Service needs: One of the most basic uses for a Decision Management System is the identification of service needs. Today most assets are serviced on a fixed schedule. However, as usage data is collected for a specific machine or piece of equipment, it becomes possible to calculate unique service needs for that piece of equipment. Thus a tractor being driven more aggressively, though traveling the same number of miles and being the same age as another, will be identified as needing service more often. This keeps equipment healthy longer while also eliminating unnecessary services.

Validate usage: This same increased instrumentation is driving remote monitoring and advice to new levels. When a piece of equipment is constantly monitoring its own usage and logging this information, a Decision Management System can be used to check that the usage is appropriate. For instance, if heavy equipment is being left idling too much during a particular shift or if a particular operator is heavy-handed in some way, this can be flagged and remedial actions suggested. This uses a Decision Management System to provide supervision through the remote logging. Service needs and potential failures can also be flagged and alerts issued.

Preventative maintenance: Failures and problems with expensive assets can result in extensive, and costly, downtime. The ideal for many organizations is to fix things before they become critical, to minimize the risk of such downtime. Decision Management Systems can use predictive analytics to identify assets at risk of failure and then use rules to assign an engineer's spare time to check the asset or extend a scheduled visit to proactively fix something early.

Proactive use of assets: If assets are not in continuous usage then there is a potential opportunity to use an asset for some other activity during its down time. Deciding what to do with otherwise idle assets is increasingly something that can be automated using a Decision Management System based on the prediction of the likely value of the various possible actions.
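A sketch of the preventative maintenance decision just described, with an invented risk stub: assets whose predicted failure risk crosses a policy threshold are queued for an engineer visit, most urgent first.

```python
# A sketch of a preventative maintenance decision; the risk model and
# threshold are invented stand-ins for real analytics and policy.
def failure_risk(asset: dict) -> float:
    # Stub standing in for a predictive model trained on sensor data.
    return min(1.0, asset["vibration"] * 0.1
                    + asset["hours_since_service"] / 5000)

assets = [
    {"id": "pump-1", "vibration": 2.0, "hours_since_service": 900},
    {"id": "pump-2", "vibration": 6.5, "hours_since_service": 3800},
    {"id": "pump-3", "vibration": 1.2, "hours_since_service": 400},
]

RISK_THRESHOLD = 0.5   # policy rule: check anything above this risk
to_check = sorted((a for a in assets if failure_risk(a) > RISK_THRESHOLD),
                  key=failure_risk, reverse=True)
print([a["id"] for a in to_check])   # ['pump-2']
```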
Manufacturing is another area where Decision Management Systems are being introduced. Many manufacturing operations are large scale, with huge numbers of potential decisions to be made. As customers demand more customized products, organizations must also customize at scale. This means that tasks and work allocations that used to be identical across production runs must be customized and tweaked for different customers or batches. Decision Management Systems offer a new level of control. See also process efficiency below.

What to make: One of the most basic decisions is that of what to make. When an organization manufactures for stock, rather than specifically for an order, it must constantly choose what to make and what not to make, what colors to pick, what packaging sizes to use and so on. A Decision Management System can use predictions and forecasts, current stock levels and more to decide what is most appropriate to make at any given moment.

Allocation and configuration of machines: Especially in complex manufacturing situations, the allocation of work to a machine and the configuration of that workstream can be critical to the overall effectiveness of the line. Many machines might be capable, if configured correctly, of handling specific tasks, and specific tasks might be assignable to many machines. Decision Management Systems can handle the complexity of this kind of situation, applying the rules that determine which machines can do what and combining them with predictions and even optimization to ensure maximized results.

Reducing manufacturing problems: In complex manufacturing scenarios there is a constant risk that problems might be introduced into the finished product. The wrong part may be used, something may be damaged by the production process or a task may take an unexpectedly long time. Using Decision Management Systems can reduce these problems. Systems can assign quality improvement actions to the QA team, replacing fixed checklists with dynamic next-best-check systems while also assigning supervisory and training resources for proactive mitigation of potential problems. Predictions of risk, rules about skill levels required and certifications, and much more can be used to drive increasingly sophisticated decision-making on the production floor.

Governance, Risk and Compliance is a broad topic that is a serious area of focus in many regulated industries. Ensuring that everything is done according to the regulations, enforcing and managing a governance environment, and tracking and accounting for all appropriate risks can be a daunting task. Attempting to do all this manually is prohibitively expensive. Decision Management Systems provide the leverage organizations need to effectively manage their GRC approach.

Data management: When it comes to data, regulations can prescribe what data should be stored, what cannot be stored, what must be anonymized and much more. Who can access the data, under what conditions and with what degree of supervision may all be spelled out. Reporting can be specified too, documenting what must be reported to whom and by when. All of this data management can be enforced and managed with a Decision Management System, avoiding fines and reducing overhead.
Using rules to automatically manage all the explicit guidance, and integrating analytics to help with detecting identity fraud, makes keeping data safe, secure and appropriately available practical. Unlike manual processes, Decision Management Systems scale up quickly and respond in real time as data flows through your systems.

Authorization

Another significant issue in GRC is who can approve whom and what. Preventing people from (even indirectly) approving their own expenses, trades or data access is important and increasingly complex in matrixed organizations. Again, identity theft prevention is critical if authorization schemes are to be robust and believable. Ensuring that a single coordinated set of business rules drives authorization across multiple systems is a great role for a Decision Management System. If, despite everything, things go wrong, GRC systems need to alert the right people, give them the right information and do so quickly enough to avoid the additional fines and problems that can result from delay. Rather than pushing dumb alerts to someone's desktop (and hoping they respond), a Decision Management System can act automatically and work its way through the right set of escalations and notifications, even when this chain of events is complex and changes often.

Healthcare is an industry increasingly rich in data. Yet simply applying analytics, doing only what the data tells you, is not really practical in an industry where peer review, published best practices and government regulation abound. Decision Management Systems, with their combination of rules and analytics, are ideal for this environment. As more of the healthcare industry is computerized and more data is collected, Decision Management Systems are playing an increasingly important role. As healthcare goes mobile, helping patients live at home and treat themselves, this is only going to increase.

Identify drug interactions and other issues

One of the most common uses of Decision Management Systems in healthcare is to identify potential problems in prescriptions. Identifying potential drug interactions and checking dosages prescribed against patient details involves large numbers of rules gathered from best practices, medical research, drug companies and more. Providing these checks in the hospital as nurses administer drugs, at the pharmacy as prescriptions are fulfilled, and warning doctors about potential issues can all be driven from the same rules, ensuring consistency and reach.

Determine treatment

Best practice in healthcare evolves continuously. New therapies, new suggestions, new drugs and new ways to match a patient to a therapy, using genetic matching for instance, make it hard for medical professionals to stay current on the latest treatments. Especially when multiple possible treatments can be proposed, selecting the one most likely to work for a particular patient (personalized medicine) is complex. Decision Management Systems engage medical professionals in managing their own rules, bring analytics to bear as data is gathered regarding what works, and easily stay current as best practices and guidelines change.
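Returning to the prescription-checking use case above, the following is a deliberately simplified sketch of a rules-based interaction and dosage check. The drug names, interaction pairs and dose limits are invented placeholders, not medical guidance; in practice the rules would be sourced from curated medical references and managed outside the code.

```python
# Illustrative only: rules-based interaction and dosage checks of the kind
# described above. All rule data here is a hypothetical placeholder.
INTERACTING_PAIRS = {frozenset({"drug_a", "drug_b"})}   # hypothetical rule data
MAX_DAILY_DOSE_MG = {"drug_a": 40, "drug_b": 1000}      # hypothetical limits

def check_prescription(current_meds, new_drug, daily_dose_mg):
    warnings = []
    for med in current_meds:
        if frozenset({med, new_drug}) in INTERACTING_PAIRS:
            warnings.append(f"interaction: {new_drug} with {med}")
    limit = MAX_DAILY_DOSE_MG.get(new_drug)
    if limit is not None and daily_dose_mg > limit:
        warnings.append(f"dosage: {daily_dose_mg}mg exceeds {limit}mg limit")
    return warnings

# The same rule set can back checks at the hospital, the pharmacy and the
# doctor's desk, giving the consistency and reach discussed above.
print(check_prescription(["drug_b"], "drug_a", 60))
```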
Target at-risk people

While we might wish we could always apply all the resources that might help to a medical problem, the reality is that we cannot. Determining which patients are most at risk and what kinds of interventions are likely to have the biggest impact is a fact of life for most healthcare organizations. Using analytics, especially predictive analytics, as well as expert rules and best practices, a Decision Management System can ensure resources are applied effectively to those most at risk.

Scheduling

Healthcare, like many labor-intensive industries, involves complex schedules. Making sure that the relevant specialties are available at the right time and place, managing staffing to match demand, ensuring that operating rooms are prepped before they are needed: all this makes scheduling in healthcare difficult. Decision Management Systems can use rules and optimization to come up with the most effective schedules possible, given the constraints, saving money and lives at the same time.

To wrap up this discussion of use cases, here are some more generic examples. Many industries share common problems that can be lumped into a focus on process efficiency and effectiveness. Decision Management Systems, by handling critical decisions in those processes, can make a big difference on both counts. Some examples follow.

Validation

Has enough information been entered? Does it match other information available, and is it internally consistent? This kind of rules-based validation is common to many processes, and using a Decision Management System to automate this check speeds processing and reduces manual overhead.

Completeness and readiness

Many processes have steps that are more expensive, such as conducting an inspection or writing a contract. By automating a check to see if the process is ready to go to the next step (do we have all the information needed to effectively inspect this ship or building, or to put a contract together for this deal or annuity?), Decision Management Systems ensure that processes only move on when it makes sense to do so.

Plausibility

An interesting variation comes in situations where only a human can really tell if something is true or not, such as a customs declaration. A Decision Management System might use rules and analytics to determine how plausible such a declaration is, helping focus limited resources where they will do the most good.

Assignment or allocation

Many processes involve assignments and allocations: decisions about who to make responsible, how to allocate the work involved, and who should do what. When processing speed is important, or when consistency and traceability are a must, a Decision Management System can provide rapid, agile, compliant processing.

Sequencing and Adaptive Case Management

Many processes are increasingly modeled using a more adaptive approach. Instead of laying out all the steps and branches, different clusters or groups of tasks are identified that may need to be handled for a particular transaction or case. Deciding which need to be included, and when, is a task ideally handled by a Decision Management System that is monitoring the case and constantly evaluating the most appropriate and necessary steps for the case.

Dynamic forms

Collecting data from people is a constant challenge. While sometimes a simple form or a form with a few options works well, sometimes it is very difficult for a user to determine what data is required. Each question they answer drives the need to answer, or not answer, subsequent questions. This kind of dynamic questioning is another good use case for Decision Management Systems.

Dynamic Checklists

Checklists are a powerful tool for improving the effectiveness of staff members. But to work, a checklist must be very specific.
Trying to handle even a small number of situations with a single checklist can make for complex checklists with lots of navigational instructions to make sure the right items are checked at the right times. Instead, a Decision Management System can be used to drive dynamic checklists: very specific checklists generated for each circumstance, containing all the checks needed for that circumstance but only the ones needed.

Besides these specific use cases, there are some key characteristics of decisions that make them suitable for automation using Decision Management Systems. These are discussed more fully in the book, but suitable decisions have four characteristics:

Repeatable. If a decision is not made in a repeatable way and made regularly, it will not be possible to automate it nor to show a return on doing so.

Non-trivial. A decision must have a degree of complexity to make it worth the investment in the additional capabilities discussed above. There must be policies or regulations that drive and control the decision, a degree of expertise involved in making it well, or some analysis of information required.

Measurable business impact. It must be possible to tell what the business impact of improving the decision will be, and even what a good decision is relative to a bad one. If the value of improvement cannot be described, or worse yet the value of a decision cannot be measured at all, then it will not be possible to show the value of a Decision Management System.

Suitable for automation. Every organization has a different attitude to automation. Unless the organization is willing to consider a system to make the decision, there is no point in building a Decision Management System. A decision that must be taken by a person might involve dependent decisions that are suitable candidates for automation, but building a Decision Management System to automate a decision that an organization believes should be taken by a person will result in a system that does not get used.

Suitable decisions often break down into rules-centric decisions, such as eligibility, validation and calculation, and analytic-centric decisions, such as those relating to risk, fraud and opportunity.

Big Data is often described in terms of an increase in volume, an increase in velocity and an increase in variety: more data, of more types, arriving more quickly. This increase in Big Data volume, variety and velocity has clear implications that are driving the business case for Decision Management Systems. In an era where we must handle more data, and where the rate of increase in data is itself increasing, we have to face the limitations of human interpretation. Where we might once have assumed that a person could look at and usefully interpret all the data that might be relevant, this is increasingly impractical.

Big Data volume has two main implications for Decision Management Systems. First, it makes the case for automating decisions stronger. Computers are generally much better at looking at lots of data and at doing so quickly enough to be useful. They can be set up to balance recency against long-term trends and avoid many of the data interpretation problems that beset human decision-makers. As data volumes grow, Decision Management Systems are allies in making sense of all this data. Second, it makes the case for industrializing analytics stronger. The basic premise is that as there is more data, so you need more analytic models and you need those models to be built more efficiently.
This means applying more automation and more technology in the process of building the models themselves. This could be through machine learning, through fully automated modeling capabilities or through automation added to tools for data scientists. It also means applying the latest in in-memory and in-database technologies to decrease the time all this modeling takes. The days when an individual modeler could hand-craft a complete model, sampling data carefully and doing every step by hand, are gone.

Big Data involves adding more types of data, from more sources inside and outside of the organization, to your analytic toolkit. Social, mobile, local and cloud data sources are exploding, and organizations must find ways to take advantage of these before their competitors do. This means that the old approach of pulling together all your data into a 360-degree view simply won't work anymore. You will never get caught up, as there is always going to be another potentially useful data source. Instead, first model the decisions that impact your business and focus on integrating and delivering the data sources you need for a given decision.

Variety also has two particular implications for Decision Management Systems. First, it means you have to broaden your definition of data infrastructure. Many (most) Decision Management Systems rely on an operational datastore that is relational and use analytic models built entirely from structured data. With the explosion of new data sources, often in unstructured or semi-structured formats, this is not going to work anymore. Your analytic team is going to need to be able to access data stored in a variety of formats (stored on Hadoop, for instance) and your operational systems may need to consume less structured records and make decisions against them (what to do with this sensor record, for instance). The problems are the same (how to build analytics, how to make decisions) but there are lots of new data sources to deal with. Second, you will need to improve your skills in text analytics and entity analytics. Being able to identify what is being discussed in unstructured, text data sources, especially which products or actions, is key. You need to be able to tell that this email is about this product, that this customer keeps talking about the call center, and so on, and feed that insight into your modeling and your Decision Management Systems.

As more data arrives more quickly, we have to deal with velocity in two ways. We have to decide more quickly and we have to deal with data in motion (streaming data), not just data at rest. The first of these, like the increase in data volumes, simply increases the value of a Decision Management System. As our data arrives more rapidly, the value of processing and acting on that data in real time is likely to grow. This need for real-time responses pushes us inexorably towards Decision Management Systems because people just don't make decisions in real time. As real-time becomes the right-time, we must automate decision-making. This increase in velocity also tends to make decision value decay more quickly. The value of a decision decays over time (decision latency), and the increasingly rapid arrival of new data means that decisions will decay faster as new data makes the old decision less relevant. This has the side effect of also making predictive analytics more valuable. With less time to decide, it becomes more important to have some predictive headroom: the further out I can see, the more time I have to respond.
With slow-moving data it might have been enough to see yesterday's summary or today's. As data moves more rapidly, we must see into the future and make predictions if we are to have time to respond. The second implication of velocity is that we must get better at injecting decision-making into streaming data. We have to be able to package up business rules and analytics and inject them into a data stream so that we can enrich the stream with decision answers, or so that we can kick off parallel processes as the stream flows by. This requires different deployment metaphors, with lower latency and more state management capabilities. The growing ability of business rules management systems to integrate with event handling, and the deployment of analytic models into streaming data infrastructures, are just two of the developments supporting this trend.

The Volume, Variety and Velocity of Big Data are going to drive more demand for Decision Management Systems, put additional pressure on those building analytics for Decision Management Systems, and mean we must expect more of the technologies we use to build them.

Analytics Capability Landscape

Organizations today are turning to analytic capabilities to drive decision-making. But with the different types of decisions that need to be made, multiplied by the different types of analytic capabilities available, it can be difficult for organizations to choose the right capabilities for the situations at hand. Some decisions require simpler capabilities, while others require complex capabilities that need the support of a talented IT team. Additionally, organizations also need to consider the user of the analytic capabilities. While some users have the experience and skills to leverage tools that require programming and heavy analysis, others may benefit more from a simpler drag-and-drop graphical user interface. Choosing a tool that the intended user is unable to get the most from, no matter how great the tool itself is, means that the tool will go unused, affecting decisions across the organization. With the tools comes the ability to handle Big Data as well. No longer a buzzword, the era of Big Data means that analytic capabilities must extend to both structured and unstructured data, parsing the information to assist organizations with informed decision-making. Ultimately, any analytic capabilities used by the organization must align with business needs, be geared toward the intended user, and support decision-makers, both human and automated. To help answer these questions, Decision Management Solutions completed research in the Fall of 2014, examining the different types of analytic capabilities available to organizations and the business situations for which they are most appropriate. The complete report and associated infographic can be downloaded free of charge from the Decision Management Solutions website at decisionmanagementsolutions.com/analytics-capability-landscape.

Shifting the Analytic Focus

Historically, the focus for most organizations has been on reporting. Recently, this has shifted to a balance of reporting and monitoring. With the growth of Big Data analytics as a focus for companies, and companies' desire to become data-driven, the next 12-24 months is likely to see a shift toward decision-making as a focus. For instance, a live poll showed that three quarters of respondents were focused on reporting or monitoring today and were evenly split between the two.
But fast-forward 12-24 months, and almost eighty percent of respondents are focusing on decision-making instead.

A New Approach

Given the wide range of analytic capabilities available, the various roles that can be involved in using these capabilities, and the range of analytic styles available, navigating the analytic landscape requires a new approach. This approach is decision-led, role-centric, and style-based. It allows an organization to navigate the analytic landscape, selecting appropriate analytic capabilities for each decision-making problem based not on the kind of analytic capability but on its fit for the intended purpose. By focusing on the kind of decision-making problem and on the role(s) involved in solving the problem, organizations can identify a suitable style of analytic capability (descriptive, diagnostic, or predictive) to ensure the chosen capability will be used effectively.

Navigating the Analytics Capability Landscape

Decision-Led

As organizations shift from a focus on reporting and monitoring to one focused on decision-making, the most important thing to know about each project is which decision(s) are being targeted for improvement. Only a clear view of the decisions will allow selection of an appropriate analytic capability. The characteristics of the decision to be improved are at the heart of selecting an appropriate analytic capability. The four characteristics that matter most in describing a decision at this stage are:

- Volume, or how often a decision is made.
- Repeatability, or how similar each decision is.
- Latency, or how long the organization has to make the decision.
- Complexity, or how difficult the decision is.

Role-Centric

Decision-making problems can be solved in a variety of ways. The people who will be involved, and the roles they will play in improving the decision, will further constrain and direct the type of analytic capability to be used. While many roles exist in organizations, they can generally be classified as one of four types:

- Business decision-makers.
- Business analysts.
- IT data professionals.
- Analytic professionals.

A clear understanding of who is going to be involved in solving a decision-making problem is the next step.

Style-Based

With a clear understanding of the decision that is to be improved and the role(s) that will be involved, it is possible to identify the right type of analytic capability. Three elements define analytic style:

- Interactivity: Is the capability designed for explorers or settlers?
- Presentation: Does the capability deliver a visual result or a numeric one?
- Scaling approach: Is the capability a DIY one or a factory-made one?

This analytic style will determine whether the roles involved can use the capability to solve the decision-making problem at hand far more effectively than the capability's position on an arbitrary maturity curve will. These three analytic styles can be combined in eight ways. In practice, there is overlap between the styles, but these combinations are worth considering as examples of the kind of capabilities available.
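As a concrete, and entirely hypothetical, illustration, the decision characteristics and roles described above can be captured as a simple profile from which candidate analytic styles are short-listed. The structure and the mapping rule below are invented for illustration and are far cruder than a real capability assessment would be.

```python
# Hypothetical sketch: capturing the four decision characteristics and the
# roles involved so candidate analytic styles can be short-listed.
from dataclasses import dataclass

@dataclass
class DecisionProfile:
    volume: str         # "high" or "low" - how often the decision is made
    repeatability: str  # "high" or "low" - how similar each decision is
    latency: str        # "real-time" or "batch" - time available to decide
    complexity: str     # "high" or "low" - how difficult the decision is
    roles: list         # e.g. ["business analyst", "analytic professional"]

def candidate_styles(p: DecisionProfile) -> list:
    # High-volume, repeatable decisions point toward predictive, numeric,
    # "factory-made" capabilities; exploratory ones toward visual, DIY tools.
    if p.volume == "high" and p.repeatability == "high":
        return ["predictive, numeric, factory-made"]
    return ["descriptive or diagnostic, visual, DIY"]

profile = DecisionProfile("high", "high", "real-time", "high",
                          ["analytic professional"])
print(candidate_styles(profile))
```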
Becoming an analytic organization is going to require a broad portfolio of analytic capabilities. These capabilities will need to include descriptive, diagnostic, and predictive analytic capabilities. Different capabilities do not replace each other so much as complement each other. Different problems will require different capabilities, depending on the decisions being improved and the roles involved. Making sure that capabilities can be selected from a broad portfolio will be important for long-term success. Thank you to our research sponsors FICO and OpenText. Download the complete report and infographic here.

Selecting Vendors

Each Decision Management System requires different subsets of the capabilities described above. The right set of vendors and products is going to vary, depending on the requirements and needs of both the project and the organization as a whole. There are many vendors, large and small, to choose from, with more being added every day. The products they offer have great breadth and depth of functionality in every area. Every one of the products listed continues to evolve and grow, adding new functionality and enhancing existing capabilities. Some vendors are merging to create complete product sets or suites under one umbrella, while others are collaborating to allow their products to be used together more effectively. PMML already provides some standards support for this collaboration, and new standards are on the horizon that will extend this.

There is a wide range of vendors available for each of the product categories you need to develop Decision Management Systems. Many organizations will have existing relationships with vendors and will use other software products they provide. Experience with clients shows that familiarity and comfort with a vendor, and confidence that you can work well with them, is a strong predictor of success. Some organizations will work with Systems Integrators or other service providers who have strong vendor relationships that will likewise contribute powerfully to a successful project. The fit of the vendor(s) you select with your organization is often much more important than the specifics of their functionality. All the vendors in this report have paying customers who are successfully using their technology to build some kind of Decision Management System. There is no magic or best set of vendors or products. There is a rich set of vendors and products, and most, if not all, organizations could pick from multiple vendors or vendor combinations and be successful. Some things to consider:

- When both decision logic and analytic insight must be combined in a Decision Management System, those Business Rules Management Systems and related products that are model-aware and can consume and integrate with predictive analytic models are more likely to be successful than those that are not.
- A Predictive Analytic Workbench that supports a range of deployment options for predictive analytic models into production, and can monitor and manage these models, will generally require less integration work than one that does not.
- Predictive Analytic Workbenches that support in-database modeling and in-database scoring (directly or through partnerships with others) are increasingly valuable.
- An optimization environment that supports the generation of business rules (often through integration with data mining capabilities) as well as solving to produce a set of actions will provide more deployment options.
- Components that support standard platforms and provide a rich set of APIs and thin client interfaces are generally preferred.
- Depending on the system involved, a focus on real-time or on batch (or on a mixture of the two) will be essential, as will an understanding of the need for support for Java or legacy platforms.
- The view taken by the organization of open source products may constrain or focus your selection process.
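To make the PMML point above concrete: a model trained in one vendor's workbench and exported as PMML can be scored by a different tool. The sketch below assumes the open-source pypmml package (which requires a Java runtime); the file name and input fields are placeholders.

```python
# Illustration of PMML-based interoperability, assuming the open-source
# pypmml package. The model file and field names are invented placeholders.
from pypmml import Model

model = Model.fromFile("churn_model.pmml")   # exported by any PMML-capable tool
score = model.predict({"tenure_months": 7, "num_claims": 2})
print(score)  # output fields depend on the model, e.g. a churn probability
```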
This section lists vendors and basic company contact information alphabetically. Although every attempt has been made to validate this information, Decision Management Solutions accepts no liability for the content of this report, or for the consequences of any actions taken on the basis of the information provided. Advertisements are the responsibility of the vendor concerned and have not been reviewed by Decision Management Solutions. The vendor section has two parts. First there is a list of vendors with their contact information. Many vendors listed have multiple products, some of which meet the criteria for inclusion in the report and some of which do not. A single eligible product is enough to get the vendor listed. The table of products that follows includes listings of individual products as well as links to First Look reviews, if any, on JTonEDM. Only the most recent product First Look is linked, though previous First Looks are generally referenced when a new one is written. Some First Looks cover multiple products, and in these circumstances both products are listed and linked to the same First Look. Actian Corp ACTICO GmbH

Appendix: Decision Management Systems

Decision Management Systems are a new class of system. They bring together two kinds of systems (operational systems that manage the transactions of the business and analytic systems that help you understand how to run the business better) to deliver systems that actively work to help you run your business or organization. Decision Management Systems are agile, analytic and adaptive, and are built using a three-step process of decision discovery, decision services and decision analysis. They deliver high ROI by reducing fraud, managing risk, boosting revenue and maximizing the value of scarce resources. There is more information on Decision Management Systems, their characteristics and the process of building them in the author's book Decision Management Systems: A Practical Guide to Using Business Rules and Predictive Analytics (IBM Press, 2012). The book is organized into three parts.

Part I: The Case for Decision Management Systems

The first four chapters make the case for Decision Management Systems: why they are different and how they can transform a 21st-century organization. Chapter 1, Decision Management Systems are different: This chapter uses real examples of Decision Management Systems to show how they are agile, adaptive and analytic. Chapter 2, Your business is your systems: This chapter tackles the limits of manual decision-making, showing how modern organizations cannot be better than their systems. Chapter 3, Decision Management Systems transform businesses: This chapter shows that Decision Management Systems are not just different from traditional systems; they represent opportunities for true business transformation. Chapter 4, Principles of Decision Management Systems: This chapter outlines the key guiding principles for building Decision Management Systems.

Part II: Building Decision Management Systems

Chapters 5 through 7 are the meat of the book, outlining how to develop and sustain Decision Management Systems in your organization. Chapter 5, Discover and model decisions: This chapter shows how to find, describe, understand and model the critical repeatable decisions that will be at the heart of the Decision Management Systems you need.
Chapter 6, Design and implement Decision Services: This chapter focuses on using the core technologies of business rules, predictive analytics and optimization to build service-oriented decision-making components. Chapter 7, Monitor and improve decisions: This chapter wraps up the how-to chapters, focusing on how to ensure that your Decision Management Systems learn and continuously improve.

Part III: Enablers for Decision Management Systems

The final part documents people, process and technology enablers that can help you be successful. Chapter 8, People Enablers: This chapter outlines some key people enablers for building Decision Management Systems. Chapter 9, Process Enablers: This chapter continues with process-centric enablers, ways to change your approach that will help you succeed. Chapter 10, Technology Enablers: This chapter wraps up the enablers with descriptions of the core technologies you need to build Decision Management Systems.

Decision Management Systems have three critical characteristics. These characteristics strongly differentiate them from current, mainstream business applications. Most such business applications are difficult, expensive and time-consuming to change; Decision Management Systems are agile and transparent. The business applications that support most organizations are entirely separate from the analytic environment of those organizations; Decision Management Systems are both operational and analytic. Finally, most business applications are designed and built to meet a specific set of requirements that is known and not expected to change; Decision Management Systems are adaptive, learning and improving as they are used.

Decision Management Systems are Agile

Business agility is an overused expression in corporate IT, with all manner of approaches and technologies being promoted as delivering business agility in some way. Decision Management Systems are agile because the logic in them is easy to change and easy to adapt to changing circumstances. When new policies or regulations are issued, the logic that implements them can easily be found and safely changed. These changes don't undermine compliance because Decision Management Systems are transparent: it is clear how they will work in the future, and also clear how they acted in each specific historical situation. This agility allows more stable business processes, as changes are easy to make to the Decision Management Systems that support those processes, and ensures that rapidly changing know-how and experience can also be effectively embedded in systems without the danger that it will become stale and out of date.

Decision Management Systems are Analytic

Analytics is a hot topic in many organizations today. Yet most analytic systems are completely separate from the operational systems that run the business. These analytic systems rely on data extracted from the operational systems but are otherwise quite standalone. In contrast, Decision Management Systems deeply embed analytic insight to improve their operational behavior. Analytics are used to divide customers or transactions into like groups to allow actions to be effectively targeted. Analytics are also used to make predictions of the degree of risk involved in a transaction, the likelihood of fraud or the extent and type of opportunity available.
These predictions are used to select from the available alternatives in a way that will manage risk according to the organization's guidelines, reduce fraud, maximize revenue and effectively allocate resources across competing initiatives.

Decision Management Systems are Adaptive

Business systems, like business people, need to constantly adapt and learn. They need to experiment and see if a new approach might work better than a long-established one, challenging conventional wisdom. They must manage trade-offs in an ever-changing business climate. They must allow their performance to be monitored in terms of how effective the decisions they make turn out to be. In this way Decision Management Systems are adaptive, built to respond to changing conditions and to support a process of continuous improvement through testing and experimentation.

Building Decision Management Systems involves many of the same techniques, tools and best practices that building any reliable, high-performance operational system involves. All the skills and experience an organization has in developing information systems apply. The new and changed activities required fall into three phases: decision discovery, decision services and decision analysis.

Decision Discovery

Decision Management Systems are focused on automating and improving decisions. Most organizations do not have a well-defined approach for finding, modeling and managing the decisions they make. To effectively build Decision Management Systems, then, the first step is to find the repeatable, non-trivial decisions in the organization that have a measurable business impact and are therefore candidates for automation and improvement. Examples of suitable decisions include checking the eligibility of a person for a government benefit or commercial product, validating that an organization can become a supplier or meets some defined criteria, pricing a loan or other financial instrument based on an assessment of the risk involved, and making an offer to a consumer to maximize the value of an opportunity to interact with them. There are a number of ways to find these decisions. They can often be found simply by interviewing and working with business experts. The tasks in business processes where choices are being made, or where there is a pause for review, are typically decision-making tasks. Many branches in processes are preceded by decision-making. Decisions can also be found by analyzing Key Performance Indicators and other metrics to see what choices make a difference to those metrics. It is unusual for something to be tracked as a metric if there are no choices made that cause it to go up or down. The top-level decisions that these approaches find should be described, primarily by defining a question that must be answered to make the decision along with the allowed or possible answers. For instance, a claims review decision might answer the question "Is this claim fraudulent and what should we do about it?" with allowed answers including routing it to the fraud investigators, putting it through a regular claims review or fast-tracking it for immediate payment. Top-level decisions can and should be decomposed into the subordinate decisions they are dependent on: the smaller decisions that must be made before the top-level one can be made. This decomposition is recursive and provides necessary detail on how these decisions are actually made day to day.
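The question-and-allowed-answers description and the recursive decomposition just described can be captured in a very simple structure. The sketch below, which reuses the claims example from the text, is illustrative only; in practice this information would live in a decision modeling tool rather than in code.

```python
# Minimal sketch of describing and decomposing a decision: a question, its
# allowed answers, and the subordinate decisions it depends on (recursive).
from dataclasses import dataclass, field

@dataclass
class Decision:
    question: str
    allowed_answers: list
    sub_decisions: list = field(default_factory=list)  # recursive decomposition

claims_review = Decision(
    question="Is this claim fraudulent and what should we do about it?",
    allowed_answers=["route to fraud investigators",
                     "regular claims review",
                     "fast track for immediate payment"],
    sub_decisions=[
        # Invented sub-decisions for illustration
        Decision("How likely is this claim to be fraudulent?",
                 ["high", "medium", "low"]),
        Decision("Is the claimant eligible under the policy?",
                 ["eligible", "ineligible", "needs review"]),
    ],
)
```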
Decision Modeling

Decision modeling is a powerful technique used in Decision Discovery to capture decision requirements. Decision modeling has four steps that are performed iteratively:

- Identify Decisions. Identify the decisions that are the focus of the project.
- Describe Decisions. Describe the decisions and document how improving these decisions will impact the business objectives and metrics of the business.
- Specify Decision Requirements. Move beyond simple descriptions of decisions to specify detailed decision requirements. Specify the information and knowledge required to make the decisions and combine these into a Decision Requirements Diagram.
- Decompose and Refine. Refine the requirements for these decisions using the precise yet easy-to-understand graphical notation of Decision Requirements Diagrams. Identify additional decisions that need to be described and specified.

This process repeats until the decisions are completely specified and everyone has a clear sense of how the decisions will be made. At this point a requirements document can be generated, packaging up the decision-making requirements identified. This can act as the specification for business rules implementation work or for the development of predictive analytics. Alternatively, the model can be extended with decision logic, such as decision tables, to create an executable specification of the decision-making requirements. For a detailed discussion of decision modeling with the new Decision Model and Notation (DMN) standard, download our white paper "Decision Modeling with DMN". You can also read more about decision modeling in the Best Practices section.

Decision Services

The decision discovery step enhances traditional analysis and requirements-gathering tasks. Once the decisions are identified and modeled, an iterative process of development can begin. The objective is to develop Decision Services: coherent, well-defined components that make a decision for the other processes and system components in the solution. Beginning by defining simple interfaces that allow these services to be asked a question and give back one of the allowed answers, an iterative approach is used to flesh out the decision-making. The decomposition of the decision shows the sequencing and structure of the decisions, and the business rules, predictive analytic models and optimization models required can be developed and added to this structure. A complete Decision Service will require some combination of business rules, predictive analytic models and optimization models. Most will not require the deployment of optimization models. Optimization is more likely to be applied to a large number of similar decisions, with the optimal actions identified for each decision used to derive new business rules that are more likely to result in optimal decisions in the future. Some decision services will not require predictive analytic models, especially those primarily concerned with eligibility and compliance where business rules dominate. Even when analytic insight is important to a decision, sometimes that insight can be best represented with a set of business rules mined from historical data. When probabilities are needed, however, predictive analytic models will either need to be executed by the decision service, executed in the database the decision service is using, or stored in the database the decision service uses if a batch update of the prediction is acceptable.
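As an illustration of the Decision Service shape described above (asked a question, it returns one of the allowed answers by combining business rules with a predictive score), here is a minimal sketch. The scoring function is a stand-in for a deployed predictive analytic model, and all names and thresholds are invented.

```python
# Hypothetical Decision Service sketch: eligibility rules run first, then a
# predictive score refines the outcome. The score is a placeholder for a
# real model executed in the service or in-database.
def fraud_score(claim: dict) -> float:
    # Stand-in for a deployed predictive analytic model
    return 0.9 if claim["amount"] > 10_000 and claim["prior_claims"] > 3 else 0.1

def decide_claim(claim: dict) -> str:
    # Rules-centric eligibility check before any analytics are applied
    if not claim["policy_active"]:
        return "regular claims review"
    # Analytic-centric refinement for eligible claims
    if fraud_score(claim) > 0.8:
        return "route to fraud investigators"
    return "fast track for immediate payment"

print(decide_claim({"policy_active": True, "amount": 12_000, "prior_claims": 5}))
```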
Decision Analysis

Decision services are developed and deployed as part of an overall systems development effort. Once deployed they must be monitored and analyzed to see if changes are required going forward. Decision services should be monitored for both proactive and reactive changes: changes that might help improve performance as well as changes that are necessary, for compliance for instance. Performance management and other analytic tools can be used to assess the effectiveness of the decision-making embedded in the system. As changes are identified and proposed, it must be possible to effectively assess the impact of these changes before they are deployed. It may also be necessary or desirable to design new approaches and conduct new experiments to gather new data about what works and what does not. Any changes made should be monitored to make sure they work as expected.

There are many ways to make a case for a Decision Management System. The cost of development of a Decision Management System, plus the additional software required to develop it, must be offset by a satisfactory return if a case is to be made. The top-line benefits of a Decision Management System typically come in a number of areas:

Reduced losses by eliminating fraud and waste. Predictive analytic models that establish the probability of a transaction being fraudulent or wasteful can be combined with business rules for known fraud schemes and waste prevention policies to determine which transactions to reject or to refer for investigation. Decision Management Systems have an excellent track record in dramatically reducing fraud.

Reduced risk exposure and better matching of risk to price. It has been said that there is no such thing as a bad risk, only a bad price. Using predictive analytic models to predict the risk of a loan or a policy and then applying business rules to correctly price for the predicted risk is a well-established use case for Decision Management Systems. Decision Management Systems can manage more fine-grained risk models, with dozens or hundreds of segments, and more closely match risk to price.

Increased revenue from targeted marketing and correct opportunity identification. When an organization has an opportunity it can be unclear how to maximize its value. As opportunities migrate across channels and as windows of opportunity get narrower, this gets even harder. Decision Management Systems can apply campaign and offer rules with predictive analytic models that estimate the propensity for specific offers to be accepted and to be profitable. The offer that is most suitable and most likely to be profitable can then be made, even if only a short time is available. Optimization can be used when there are constraints on how many offers can be made or on capacity, ensuring maximum return on those limited assets. Deciding on the next best action across channels ensures a focus on long-term customer value and increased revenue over time.

Productivity and maximized use of business assets, physical and human. Decision Management Systems handle large numbers of routine decisions, freeing up people to focus on more complex, higher-value tasks. They can also assign people to those tasks most likely to see a return on that investment, such as assigning collections activities based on the likely total return. These same approaches can maximize the use made of limited assets, deciding how best to handle them at each moment.

Faster time to market and shorter time to respond.
In many of these different areas the time it takes an organization to get to market with something, or to respond to a change, is critical. By making it easier and quicker to add new business rules, Decision Management Systems can improve time to market and so add additional value. This can be particularly effective as part of a legacy modernization effort where hard-to-change components are upgraded to Decision Management Systems for increased agility and lower management costs. Finally, when calculating the costs of a Decision Management System, it should be noted that these technologies often result in cheaper development relative to the equivalent traditional coding approaches. Decision Management Systems are also often dramatically cheaper to maintain. This both reduces the total cost of ownership of the system when maintenance costs are included and makes it more likely that any given improvement in the system (to match a competitor's move or to take advantage of a fleeting opportunity) will actually be attempted.

Bibliography

Fish, Alan. Knowledge Automation: How to Implement Decision Management in Business Processes. New York, NY: John Wiley & Sons, Inc., 2012.
Fisher, Ronald A. The Design of Experiments, 9th Edition. Macmillan, 1971.
Forgy, Charles. On the Efficient Implementation of Production Systems. Ph.D. thesis, Carnegie-Mellon University, 1979.
Nisbet, Robert, John Elder, and Gary Miner. Handbook of Statistical Analysis and Data Mining Applications. Burlington, MA: Elsevier, 2009.
Taylor, James. Decision Management Systems: A Practical Guide to Using Business Rules and Predictive Analytics. New York, NY: IBM Press, 2012.
Taylor, James, and Neil Raden. Smart (Enough) Systems: How to Deliver Competitive Advantage by Automating Hidden Decisions. New York, NY: Prentice Hall, 2007.

This report can be freely circulated, printed and reproduced in its entirety provided no edits are made to it. If you would like to publish an extract, please contact Decision Management Solutions at info@decisionmanagementsolutions.com. Quotes from this report should be correctly attributed and identified as © 2016, Decision Management Solutions. While every care has been taken to validate the information in this report, Decision Management Solutions accepts no liability for the content of this report, or for the consequences of any actions taken on the basis of the information provided.
Decision Management Systems Platform Capabilities

Four aspects of building a Decision Management System drive organizations to adopt new, Decision Management System-specific, technologies:

- Managing decision logic for transparency and agility
- Embedding predictive analytics for analytic decision-making
- Optimizing results given real-world trade-offs and simulating results
- Monitoring and improving decision-making over time

In this section we will introduce these four capabilities and put them in a broader context. Subsequent sections will describe the capabilities in more detail.
Managing Decision Logic

Like all information systems, Decision Management Systems require the definition of the logic to be applied during operations. In Decision Management Systems this logic is primarily that of decision-making: how a particular decision should be made given the system's understanding of the current situation. Decision Management Systems must be more agile than traditional information systems, however, so this logic cannot be managed as code. The use of code to define decision-making logic makes that logic opaque to those on the business side who understand how the decision should be made. It also makes it hard to record exactly how a decision was made, as recording exactly what code was executed is often problematic. To manage logic in this way most organizations will adopt a Business Rules Management System or a product that contains equivalent functionality. Decision Management Systems require that decision logic is managed in a way that delivers design transparency, so it is clear how the decision will be made, and execution transparency, so it is clear how each specific decision was made.

Embedding Predictive Analytics

The management of decision logic is a foundational capability for Decision Management Systems. Most Decision Management Systems should also take advantage of the information available to an organization to improve the accuracy and effectiveness of each decision. Unlike human decision-makers, Decision Management Systems cannot use visualization and reporting technologies to understand the available information. In addition, while people have a great ability to extrapolate from information about the past to see what might happen in the future, systems treat data very literally. To maximize the value of available information in terms of improved decision-making, Decision Management Systems must therefore embed predictive analytic models derived from historical data using mathematical techniques. Such models make assessments of the likelihood that something will be true in the future and make this assessment available to the decision logic in a Decision Management System, allowing decisions to be made in this context. This shift from presenting data to humans, so that they can derive insight from it, to explicitly embedding analytic insight in systems using predictive analytic techniques means that organizations will need to adopt additional technologies to analyze their data. Specifically, they will need to adopt a Predictive Analytic Workbench or equivalent functionality. They may also choose to adopt additional analytic infrastructure.

Optimization and Simulation

Many decisions rely on resources that are not unlimited. Whether these resources are staff, product inventory or service capacity, decisions must often be made in the context of a constrained set of resources. Organizations will generally want to optimize their results given these constraints, and this means that trade-offs must be made. Organizations will need to adopt optimization and simulation technologies to manage trade-offs and to ensure that decisions are made in a way that produces the best possible results given the constraints on decision-making. These technologies allow modeling of the constraints and trade-offs and then use mathematical techniques to pick the set of outcomes that will maximize the benefit to the organization. These models can also be used to drive simulations of various scenarios to see which will produce the best outcome for the organization.
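As a minimal sketch of this kind of constrained trade-off, the example below allocates a limited set of offers to customers to maximize expected value. It assumes the open-source PuLP library; the customers, offers, expected values and capacities are all invented for illustration.

```python
# Hypothetical constrained offer allocation, assuming the PuLP library.
from pulp import LpProblem, LpMaximize, LpVariable, LpBinary, lpSum

customers = ["c1", "c2", "c3"]
offers = ["loan", "card"]
expected_value = {("c1", "loan"): 120, ("c1", "card"): 80,
                  ("c2", "loan"): 60,  ("c2", "card"): 90,
                  ("c3", "loan"): 100, ("c3", "card"): 40}
capacity = {"loan": 1, "card": 2}   # limited resources per offer type

prob = LpProblem("offer_allocation", LpMaximize)
x = {(c, o): LpVariable(f"x_{c}_{o}", cat=LpBinary)
     for c in customers for o in offers}
# Objective: maximize total expected value of the offers made
prob += lpSum(expected_value[c, o] * x[c, o] for c in customers for o in offers)
for c in customers:                  # at most one offer per customer
    prob += lpSum(x[c, o] for o in offers) <= 1
for o in offers:                     # respect capacity constraints
    prob += lpSum(x[c, o] for c in customers) <= capacity[o]
prob.solve()
print([(c, o) for (c, o) in x if x[c, o].value() == 1])
```

The same model, rerun with different constraint values, is also a simple form of the scenario simulation described above.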
An organization that is established in developing Decision Management Systems will ultimately adopt technologies for all these capabilities: decision logic management, predictive analytic insight, and optimization and simulation. Some will find it useful to have more than one product with the same kind of capability; some will standardize on a single product. The products do not need to be adopted all at once, and some Decision Management Systems require only some of the capabilities.

Monitoring Decisions

The nature of decision-making is that it is often not possible to tell how good a decision will turn out to be for some time. As a result, the ongoing monitoring of decisions made and their outcomes is essential. Such monitoring allows decision-making to be systematically improved over time, both by tracking decision performance and making changes when this performance is inadequate, and by conducting experiments and analyzing the results of these experiments. Most organizations will find that they will use their existing Performance Management and data infrastructure to conduct much of this analysis. However, the use of the decision logic and predictive analytic capabilities discussed above will also be necessary. This will allow for the explicit logging of decision-making approaches and outcomes as well as allowing for easy management of experiments in decision-making. In general, this ongoing decision analysis requires design decisions and integration with existing infrastructure rather than additional technologies.

These capabilities come together in an overall platform for building Decision Management Systems, as shown in Figure 1: The capabilities of a Platform for Decision Management Systems below. Decision Logic capabilities allow for the editing of the business rules that represent the decision logic. These business rules are deployed to a Decision Service for execution. Predictive Analytic capabilities allow for data to be analyzed and turned into either additional business rules (representing what has worked in the past and is likely to work in the future) or predictive analytic models that can be deployed either to a Decision Service or to the operational datastore being used by the Decision Service.

Figure 1: The capabilities of a Platform for Decision Management Systems

Simulation and optimization capabilities are used to manage trade-offs and constraints and can result in business rules that have been optimized, optimization models that can be solved in a Decision Service, or an explicit set of actions to be taken that can be pushed into an operational datastore to drive behavior. All three sets of capabilities rely on data infrastructure to deliver test and historical data, while the predictive analytic capabilities can also take advantage of in-database modeling and scoring. The Decision Service itself can execute business rules, score records using predictive analytic models, solve constraint optimization problems and potentially tune predictive analytic models to improve their predictive power while in use. All the capabilities can consume actuals information generated by a Decision Service's ability to log its decision-making.
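The decision log just mentioned might record, for each decision made, which rules fired, what model scores were calculated and which outcome was selected. The sketch below is a hypothetical illustration of such a record; the field names are invented, not a product schema.

```python
# Hypothetical sketch of a conceptual decision log record.
import json
import time
import uuid

def log_decision(decision_name, rules_fired, model_scores, outcome):
    record = {
        "decision_id": str(uuid.uuid4()),
        "decision": decision_name,
        "timestamp": time.time(),
        "rules_fired": rules_fired,     # e.g. ["amount_over_limit"]
        "model_scores": model_scores,   # e.g. {"fraud_risk": 0.82}
        "outcome": outcome,             # the selected allowed answer
    }
    # In practice this record would flow to the data infrastructure that
    # feeds decision analysis; printing stands in for that here.
    print(json.dumps(record))

log_decision("claims_review", ["amount_over_limit"], {"fraud_risk": 0.82},
             "route to fraud investigators")
```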
However, because the capabilities discussed above can be used for more than building Decision Management Systems, it is likely that some vendors will continue to package each capability as a separate product while integrating them ever more tightly to make it easier to use them as a set. Other vendors will remain focused on a specific area of capability and will work with partners and standards organizations to ensure that other capabilities can be integrated with theirs. What organizations need in order to build Decision Management Systems is the ability to manage decision logic, create and embed predictive analytic insight, and simulate and optimize outcomes. How best to assemble this functionality will differ between organizations.

All the various deployment capabilities described above result in code or packages of definitions being deployed to a Decision Service execution environment. This environment is typically conceptual, in practice being made up of elements of multiple products. It needs to be able to execute various elements, typically when invoked using a standard API. The decision itself is made by executing generated code on the underlying platform, business rules on a deployed business rules engine, optimization models on a solver and predictive analytic models on a model execution engine. Decision Services also need to be able to log what happened each time a decision was made: which rules fired, what model scores were calculated and which outcomes were selected by the optimization model. Again, this logging often involves elements of multiple products, but conceptually a single log can be generated. Finally, model tuning may be available in the Decision Service, with a piece of analytic modeling code deployed to monitor the performance of deployed models and conduct experiments to see how the predictive performance of those models may be improved.

These technologies are used to develop and manage the decision logic, predictive analytic insight and optimization models required by a Decision Management System and to deploy them to a Decision Service. Decision Services operate in a broader enterprise IT context, however, as shown in Figure 2: Decision Services and Decision Analysis in a broader architectural context below. Decision Services are invoked for decision-making in an application context. This application context is increasingly a process being managed by a Business Process Management System. Decision Services can also be invoked by enterprise applications, both packaged and legacy. While this is less common than invocation from a business process, it is by no means an uncommon pattern. A further growing pattern is the use of a Decision Service to support an event processing context, making a decision in response to a pattern of business events and then kicking off a business process or other service as a result. In each scenario, the application context fulfills an overall business need and the invoked Decision Service improves effectiveness and efficiency. Decision Services are not stand-alone systems that run on specialist hardware or unique platforms. Instead they run on the standard enterprise platforms in use today. Different technologies support the various application servers, service-oriented platforms and programming metaphors that are common, and Decision Services can be developed to run on any such platform.
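As an illustration of the Decision Service behavior just described (rule execution, model scoring and logging behind one invocable entry point), here is a minimal sketch in Python. All names here (decide_credit_limit, the rule names, the field names, the thresholds) are hypothetical, and a real Decision Service would assemble its conceptual log from several products rather than one function:

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class DecisionLog:
    """Conceptually a single log, in practice assembled from several products."""
    rules_fired: list = field(default_factory=list)
    model_scores: dict = field(default_factory=dict)
    outcome: str = ""

def decide_credit_limit(applicant: dict, score_model: Callable[[dict], float]):
    """A Decision Service entry point, as might sit behind a standard API."""
    log = DecisionLog()
    risk = score_model(applicant)              # model execution engine...
    log.model_scores["risk"] = risk            # ...and its score is logged
    if applicant["age"] < 18:                  # business rules execution
        log.rules_fired.append("minimum-age")
        log.outcome = "decline"
    elif risk > 0.8:
        log.rules_fired.append("high-risk-referral")
        log.outcome = "refer"
    else:
        log.rules_fired.append("standard-approval")
        log.outcome = "approve"
    return log.outcome, log

outcome, log = decide_credit_limit({"age": 30}, score_model=lambda a: 0.35)
print(outcome, log)   # the outcome plus which rules fired and what was scored
```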
Decision Services also rely on a modern data infrastructure. This data infrastructure supplies operational data to the Decision Services and can also provide in-database analytic scoring. Business intelligence capabilities typically use this same data infrastructure to provide insight to human decision-makers. While this is not needed to develop Decision Services, business intelligence often complements these systems by supporting the handling of exceptions.

Figure 2: Decision Services and Decision Analysis in a broader architectural context

Decision Services may also use business intelligence infrastructure to provide additional context when returning a number of options. Finally, the performance of Decision Services must be monitored to support continuous improvement. This ongoing decision analysis requires both capabilities within the Decision Service, such as support for a champion and a challenger approach to a single decision, and integration with more typical business or corporate performance management capabilities such as dashboards and alerting systems.

Multiple product categories exist in the market for decision logic management, predictive analytic modeling, optimization and simulation. As Figure 3: Overlapping product categories below shows, these product categories often overlap.

Figure 3: Overlapping product categories

While there are many Business Rules Management Systems that just manage decision logic, there are also products that combine the management of decision logic with optimization or with building predictive analytic models. There are also products called Decision Managers or Business Decision Management Systems that manage decision logic, and other Decision Management products that manage decision logic and build predictive analytic models. Some Predictive Analytic Workbenches include in-database scoring capabilities while some package this separately; model monitoring and tuning is similarly sometimes packaged separately.

Navigating products for managing decision logic

When considering products for managing decision logic, there are two main areas of potential confusion: is decision logic the primary focus, and is the product rules-centric or decision-centric?

Decision logic as a primary focus. The first is the degree to which the product is explicitly focused on managing all the logic for a potentially complex decision rather than managing some decision logic so that it can be combined with some analytic insight. For instance, a number of products focused primarily on building and deploying analytic models also allow you to manage some business rules. These are typically focused either on eligibility or on cut-offs. Eligibility rules might select a subset of all possible records in a set before applying analytic models to them, or determine that only certain outcomes are allowed for a given record regardless of what the model may predict. Cut-off rules generally turn predictive scores into simple actions based on clearly defined values (both kinds are illustrated in the sketch below). Such capabilities are highly desirable in analytic products, but they will not allow an organization to manage the number of business rules involved, for instance, in complex eligibility decisions. Because they assume that analytics are at the core of a decision, they are also not likely to be effective if used to manage decision logic for decisions driven entirely by policy, regulation and best practice, which therefore have no analytic component. For these decisions a product that is either primarily focused on decision logic or that regards explicit logic and analytic insight as peers in decision-making will be more appropriate. Such products are more likely to be referred to as a Business Rules Management System or a Decision Manager. Some Decision Management platforms treat the two as peers, while products focused on Analytical Decision Management are more likely to be focused first and foremost on analytics.
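A minimal sketch of the difference between these two kinds of rules, with hypothetical field names and thresholds: eligibility rules decide which records to score at all, while cut-off rules map a score to a simple action:

```python
# Eligibility rules select the records a model should even be applied to;
# cut-off rules turn a model score into an action at clearly defined values.

def eligible(customer: dict) -> bool:
    # Hypothetical eligibility rule: only active customers in supported regions.
    return customer["status"] == "active" and customer["region"] in {"US", "EU"}

def action_from_score(score: float) -> str:
    # Hypothetical cut-off rules mapping a propensity score to an action.
    if score >= 0.7:
        return "make offer"
    if score >= 0.4:
        return "add to nurture campaign"
    return "no action"

customers = [
    {"id": 1, "status": "active", "region": "US", "score": 0.82},
    {"id": 2, "status": "closed", "region": "US", "score": 0.91},
]
for c in (c for c in customers if eligible(c)):
    print(c["id"], action_from_score(c["score"]))   # only customer 1 is scored
```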
Rules-centric or decision-centric. The second is a more nuanced consideration. Some products for managing decision logic began as expert systems or focused on the management of business rules. While users of these systems always used them to automate decisions, this was often implicit in the deployment of the rules rather than explicit in the design. Other products began with a focus on business process, added business rules capability, and similarly evolved towards a more decision-centric focus. Such products used to refer to themselves as business rules engines, then as Business Rules Management Systems and now, increasingly, as Decision Management products. In contrast, other tools began with an explicit focus on decisions. These typically allow a decision to be documented as such, including its inputs and outputs, before the logic of the decision is specified to fill the gap between inputs and outputs. These have always used decision in their names and are often called Decision Managers or use Decision Management in their names. Some, from companies with a strong analytic focus, might also refer to Decision Analytics. This focus can be simply a matter of naming, with equivalent products having different names, but it can also reflect a subtle yet important emphasis on decisions over business rules as the primary organizing principle of a product.

Navigating products for developing analytic insight

In many ways the products that offer support for the development of analytic insight are more straightforward. Most such products are described either as Data Mining Workbenches or Predictive Analytic Workbenches. These are often easy to compare with each other and offer broadly comparable capabilities. Some such products are narrowly focused, offering a small number of analytic algorithms or support for a particular kind of data. More common are workbenches that support a wide range of such techniques and data sources. The only areas of potential confusion come in the support of model management and in-database analytics, and in the limited support of some general-purpose decision management platforms for developing analytic insight.

Model management. Some analytic products include capabilities for model management in the same product that is used to build analytic models; some package it as a separate capability. There is also a small group of products designed explicitly for model management. There is no particular advantage or disadvantage to either approach, though having a more web-based and less analytic-professional-oriented environment for model management can be appealing. Some of these model management capabilities support models built using a variety of analytic tools. This support for monitoring and managing models that were built using multiple tools is a significant differentiator, regardless of whether it is packaged with an ability to build models or not.
In-database analytics. Similarly, some analytic workbenches package up support for in-database analytics (either development or deployment or both) with the workbench, while others sell it as a separate capability. When considering a workbench from a functional perspective, what matters is the tool's support for the databases in use at your organization and the depth of integration. Packaging may affect pricing but it generally does not affect capability.

Decision-centric analytics. Some products that are primarily focused on managing decision logic provide capabilities for developing analytic insight. Some offer what can be described as data mining for business rules, allowing data mining algorithms that produce decision trees or association rules to be used within the tool to find suitable rules from historical data (a sketch of this follows at the end of this section). Some offer data mining algorithms integrated with a decision tree editor for data-driven strategy design. Both capabilities are highly desirable, and the use of data mining to find business rules is a clear best practice (discussed in the section on analytic and IT cooperation). Nevertheless these tools do not offer the same range of analytic insight capabilities as a specialist tool. Some of these platforms go a little further and also offer automated analytic model building capabilities. These start to compete more directly with pure-play analytic workbenches, especially for organizations focused on decisions outside of regulated credit industries that are comfortable with automated modeling approaches. Most users of these tools will still find at least occasional reasons to use a pure-play analytic workbench, however.

Navigating products for optimization

The big differences between optimization products are a solution focus versus a tool focus, and the degree of tooling available.

Solution or tool focus. Because optimization can be complex to configure and use, many organizations adopt optimization technology as part of a solution. In this approach the optimization model is pre-configured, with integrated reporting and simulation interfaces focused on the solution. These might address a scheduling problem, supply chain issues or product configuration. In contrast, a tool focus means delivering an optimization product that can be used to solve any problem but that must be configured before it does anything. Because many of the pre-configured solutions are built on a specific tool, organizations can often begin with a pre-configured solution and then expand usage by also acquiring the underlying tool. For some organizations, however, there is only one problem that seems to justify optimization and they are likely to be happy with a single solution-focused offering.

Solver or workbench. Some optimization products are really just a set of solvers with well-defined APIs, while others offer a complete workbench with debugging tools and graphical interfaces. The solver-only approach allows the tool developer to focus on performance and scalability while supporting practitioners who want to use a particular problem definition language or editing environment. A more complete workbench tends to be more supportive of less technical users and to involve less setup work, at the expense of being somewhat more limited in how a practitioner can approach defining the problem.
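To make the earlier point about data mining for business rules concrete, the following sketch (using scikit-learn and made-up data) fits a decision tree to past decision outcomes and prints it back as if/then logic a business expert could review; products with this capability do something analogous inside their rule editors:

```python
# Fit a decision tree to historical decision outcomes, then read the tree
# back as human-reviewable rules, candidates for a rule repository.
from sklearn.tree import DecisionTreeClassifier, export_text

X = [[25, 30_000], [45, 80_000], [35, 52_000], [52, 95_000], [23, 28_000]]
y = ["decline", "approve", "approve", "approve", "decline"]  # past decisions

tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)
print(export_text(tree, feature_names=["age", "income"]))
# The printed if/then structure can be reviewed, edited and managed as
# business rules rather than deployed as an opaque model.
```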
Managing Decision Logic

The first requirement is for a complete set of software components for the creation, testing, management, deployment and ongoing maintenance of the logic of a decision (the business rules) in a production operational environment. The most common product category name for this capability is a Business Rules Management System. For the purposes of this report we are concerned only with executable logic, that is, with business rules defined at a level that allows them to be executed in a computer system. Business rules may also be defined and managed as part of a requirements approach, or to ensure consistency and accuracy in manual decision-making, but this is not the focus of this report.

Generally, an executable business rule is simply a statement of what action should be taken if a given set of conditions is true. Each rule has a conditional element that can be assessed at a moment in time to see if it is true or false, as well as one or more actions to take if it is true. These actions could be as diverse as sending emails or invoking functions but generally involve setting data values. In most Business Rules Management Systems each rule can also have an owner, notes, version history and other metadata that describes it (a sketch of this structure follows below).

Managed, executable business rules offer many advantages over traditional code, especially when automating and managing decisions:
- Business rules are easier for non-technical business experts to read, improving business-IT collaboration and improving the accuracy of business rules relative to code. This is especially true because business rules can also be represented in a variety of graphical and tabular metaphors.
- Business rules are declarative, allowing each to be managed independently, simplifying the management and reuse of decision-making logic while also allowing more precise and granular assessment of consistency, completeness and quality.
- Business rules either fire (the conditional element evaluates to true) for a particular transaction or they do not. This can (and should) be recorded each time a decision is made and represents a precise description of how a decision was made. This supports subsequent analysis and improvement of decision-making.
- A Business Rules Management System or equivalent functionality gives business users and analysts the ability to make routine changes and updates to critical business systems while freeing IT resources to concentrate on higher value-add projects and initiatives. Even when used by an IT organization in a more traditional way, a Business Rules Management System allows for more rapid change by making it easier to find, make and test changes to decision-making logic.

Managing decision logic requires software that supports a range of activities:
- Integration with other applications and services, and linking business rules to data sources, so that business rules can be developed that will use the data available in existing systems and processes.
- The development and testing of business rules by both technical and non-technical users, so that all those involved in defining a decision can participate in writing the business rules.
- Identification of rule conflicts, consistency problems, quality issues and more for both technical and non-technical users, so that full advantage is taken of the declarative nature of business rules.
- Assessment of the business impact of changes to the business rules through simulation and reporting, to ensure the right changes are being made and to understand the business consequences of changes that must be made.
- Deployment of a defined package of business rules to Decision Services in different computing environments.
- Measuring and reporting of decision and business rule effectiveness based on the results of executing business rules in decision services.
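A minimal sketch of the rule structure described above (a conditional element, one or more actions, and descriptive metadata), using a hypothetical pricing rule; a real Business Rules Management System would hold many such rules in a managed repository:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class BusinessRule:
    name: str
    condition: Callable[[dict], bool]   # assessed at a moment in time: true or false
    action: Callable[[dict], None]      # typically sets data values
    owner: str = ""                     # metadata managed alongside the rule
    version: str = "1.0"
    notes: str = ""

def set_discount(txn: dict) -> None:
    txn["discount"] = 0.10

gold_discount = BusinessRule(
    name="gold-customer-discount",
    condition=lambda txn: txn.get("tier") == "gold" and txn.get("amount", 0) > 100,
    action=set_discount,
    owner="pricing team",
    notes="Hypothetical example of a managed, executable rule.",
)

txn = {"tier": "gold", "amount": 250}
if gold_discount.condition(txn):   # the rule "fires" for this transaction
    gold_discount.action(txn)
print(txn)                         # {'tier': 'gold', 'amount': 250, 'discount': 0.1}
```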
Such a system requires the following capabilities. In a future release of the report a set of specific items to look for in each category will be identified.

A business rule management environment suitable at least for technical users is essential. This environment typically also includes design tools to integrate the deployed business rules with the rest of the enterprise computing environment. Generally this is provided as part of an Integrated Development Environment or IDE, often one based on Eclipse or Visual Studio.

Technical users are generally not the only ones who will need to edit business rules. Interfaces that allow business analysts and business users to manage business rules directly and in context, or tools that allow such interfaces to be built and maintained, are critical elements of a robust approach to managing decision logic. These interfaces could be part of an IDE, though this is less common; a thin-client interface is more likely. Some products provide editing environments for non-technical users based on the Microsoft Office products, specifically Microsoft Word and Microsoft Excel.

A variety of metaphors are often used to author business rules. A rule flow or decision flow is used to lay out multiple steps within a decision. Business rules can be specified for each of the steps or tasks in such a flow as a decision tree, decision table, rule sheet, decision graph, decision model, rule family or simply as a list of independent rules. The differences between these metaphors and the value of each will be discussed in a future version of the report.

Verification and validation tools that check business rules for completeness, consistency and logical errors help ensure that valid business rules are being written. Such tools should be suitable for both technical and business users and should be integrated with the various editing interfaces provided. These tools should ensure that the business rules being authored are at least potentially valid. They cannot tell if the business rules are the right ones for the business or if they handle every business scenario, but they can tell if the rules are structurally and logically complete and handle known variations in data such as lists of values.

Testing and test management tools that support unit, system and acceptance testing are a necessity. While there are circumstances in which business rules change so rapidly that formal testing is not part of the release cycle, most organizations will still have a set of tests they wish to run before allowing a new set of business rules to be deployed. Managing these tests should be straightforward. Business rules must sometimes be tested with other new components in the context of a broader application deployment, and being able to test the business rules in this context is useful. Many products support integration with open test management standards such as xUnit (see the sketch below). Technical users, and ideally less technical ones, should also be able to debug business rules. They should be able to walk through the business rules executing in a decision to see what happens in specific cases. This may be supported only for a local test environment or for both local and production environments.
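As a sketch of xUnit-style testing applied to decision logic, the following uses Python's unittest module against a hypothetical shipping rule; a Business Rules Management System would run equivalent test suites against deployed rule sets before release:

```python
import unittest

def shipping_rule(order: dict) -> str:
    # Hypothetical decision logic under test: free shipping at or over a threshold.
    return "free" if order["total"] >= 50 else "standard"

class ShippingRuleTests(unittest.TestCase):
    """xUnit-style unit tests that can gate deployment of changed rules."""
    def test_over_threshold_is_free(self):
        self.assertEqual(shipping_rule({"total": 75}), "free")

    def test_boundary_value_is_free(self):
        self.assertEqual(shipping_rule({"total": 50}), "free")

    def test_under_threshold_is_standard(self):
        self.assertEqual(shipping_rule({"total": 49.99}), "standard")

if __name__ == "__main__":
    unittest.main()
```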
Impact analysis and business simulation tools that allow non-technical users to see the impact of a set of rule changes on their business outcomes are an increasingly important part of managing decision logic. Business analysts and business users will not generally be willing to make changes to business rules unless they can see what impact a change will have. Similarly, when a change must be made to the business rules, due to a regulatory or policy change for instance, business users will want to see the likely impact of this change. The results must be presented in business terms to be useful: an increase in profitability, a reduction in fraud, and so on. These facilities may be provided as a batch tool for running historical or sample data through a set of business rules, or as a more interactive tool allowing a business user to select the data they care about and run new or changed rules against that data. Best practice is clearly moving this closer to the editing of the rules themselves, with the potential business impact of a change being shown automatically as the change is made in the editing tools.

Decision logic must be integrated with the data that will be available when the business rules are deployed. A product needs to provide tools that at least allow technical users to integrate the business rules with the organization's data. In addition, it is useful for a product to be able to bring in large amounts of historical data as well as large test datasets to support effective testing and impact analysis.

A set of deployment tools that support the deployment of a set of business rules, either as executable code or as a package that can be executed by a high-performance Business Rules Engine, ideally on multiple enterprise platforms, is required. One point of confusion is the difference between a Business Rules Engine and a Business Rules Management System. A Business Rules Engine can be part of a complete system for handling everything involved in working with business rules. It is clearly an important part, but it deals only with execution: it determines which business rules need to be executed in what order. A Business Rules Management System is concerned with a lot more.

Business rules can be executed in a number of different ways once deployed. Some Business Rules Management Systems support inferencing execution. Based on various algorithms, many derived from the original RETE algorithm, these determine the correct execution sequence based on the structure of the business rules and the data available when they must be evaluated. As business rules fire and change data, the engine reassesses which business rules might need to be fired next. While there are some scenarios that are very difficult or even impossible to handle without inferencing support, they are not common. The key advantages of inferencing in normal use are that it allows the business rules to be written in any order and that it ensures business rules are re-evaluated when the data used in their conditions changes (a minimal sketch of this behavior follows below). Business rules can also be executed in a sequential way, using the order specified for the business rules at design time. In many scenarios, especially when most business rules in a set will be executed for most transactions, this approach is faster. It also allows the business rules to be generated as code, which can result in smaller and more portable deployments. Finally, a number of products offer designed execution, where the rules are executed sequentially but the order is determined by automated analysis of the business rules at deployment time. This simplifies execution while still allowing business rules to be written and edited in any order without unexpected impacts on their behavior, as the deployment-time analysis will sequence new and changed business rules appropriately. For most business scenarios all these approaches work well. Each approach has its own set of best practices in business rule writing.
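A minimal sketch of the inferencing idea: rules are listed in an arbitrary order and the engine keeps re-evaluating them as fired rules change the data. Real engines use far more efficient algorithms (RETE and its descendants) and richer conflict resolution; the rule names and facts here are hypothetical:

```python
# Naive forward chaining: keep re-evaluating rules until no rule changes the
# data, so rule order does not matter and rules fire when their inputs appear.
def run_inferencing(rules, facts):
    fired = set()
    changed = True
    while changed:
        changed = False
        for name, condition, action in rules:
            if name not in fired and condition(facts):
                action(facts)      # may change data used by other conditions
                fired.add(name)
                changed = True     # re-assess all rules against the new data
    return facts

rules = [
    ("vip",      lambda f: f.get("spend", 0) > 1000, lambda f: f.update(vip=True)),
    ("discount", lambda f: f.get("vip"),             lambda f: f.update(discount=0.15)),
]
# 'discount' depends on a fact derived by 'vip'; authoring order does not matter.
print(run_inferencing(rules, {"spend": 1500}))
# {'spend': 1500, 'vip': True, 'discount': 0.15}
```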
Last, but by no means least, products should offer an enterprise-class repository for storing and managing business rules. This repository may be a complete decision management repository that also stores predictive analytic models and optimization models; it is more likely to be one that only manages business rules. It should provide access control and security, audit trails for changes made to the business rules, and versioning at a number of levels. An extensible repository that allows additional information to be added, as well as an API for repository access, can improve the integration of the product with other enterprise components. Some products provide integration with source code control systems, allowing business rules to be stored and managed alongside the code used in the rest of the application.

Embedding Predictive Analytics

Embedding predictive analytics requires a software component for the creation, validation, management, deployment and ongoing re-building of predictive analytic models. Such a Predictive Analytics Workbench allows a data miner, data scientist, analytics professional or business analyst to explore historical data and use various mathematical techniques to identify and model potentially useful patterns in that data. For the purposes of this report we are not concerned with the use of data mining or predictive analytic workbenches for one-off research projects to answer a specific question, or with the construction of statistical models per se. Only models that can be applied to a specific transaction or item, to classify it or make a prediction about it, are included. Other forms of data mining and predictive analytics can have tremendous value to an organization but they are not relevant to this discussion of Decision Management Systems.

The predictive analytic models created can predict a binary outcome (yes or no), provide a number (often representing a probability or ranking of likelihood) or a selection from a list (of products, for instance). They might also cluster or group records based on likelihoods and may identify what item is associated with what other items. Data mining and predictive analytics allow organizations to turn historical data into useful, actionable analytic insight.

Data mining and predictive analytic models are often grouped with business intelligence, reporting and visualization under the general term analytics. Data mining and predictive analytics differ from business intelligence capabilities in a number of ways. They are focused on extracting meaning about the likely future rather than summarizing or understanding the past: they use historical data to make predictions about what is likely in the future. They are probabilistic rather than definitive, in that they rarely if ever make a prediction that something concrete is definitely going to happen.
Generally they say how likely something is, make a prediction with a certain degree of confidence, or rank-order a set of possible outcomes from most to least likely. Rather than relying on the visual processing power of humans to see patterns in data, they rely on mathematical algorithms to explicitly extract these patterns from the data. This last point has an important consequence for predictive analytic workbench products being used to develop Decision Management Systems. These products must do more than simply define the right mathematical models. Presenting the results of a predictive analytic project as mathematics, or even as visualizations and reports, is not sufficient. It must be possible to use the product both to produce an effective predictive analytic model and to embed that model into an operational system. Unless the predictive analytic models produced can be effectively embedded, they will not be useful for Decision Management Systems.

A predictive analytic workbench needs to support a range of activities that are generally performed in a highly iterative way:
- Integration with a wide range of data sources so that data can be brought into a modeling environment for analysis. These data sources might be systems that are internal to the organization or external data. Increasingly these sources go beyond traditional relational data sources to unstructured and semi-structured data.
- Cleaning, integration, summarization and exploration of this data, including sampling, identifying outliers, providing distribution statistics and more (see the sketch below).
- The creation of an analytical dataset suitable for analysis, including identifying and creating potentially useful derived variables, and managing very large datasets with thousands of attributes (both original and derived).
- Automated or mostly automated analysis of very large numbers of records using a variety of algorithms such as classification, decision trees, linear and logistic regression, clustering, neural networks, nearest neighbor and more. Increasingly the use of ensemble methods, where multiple techniques are applied in combination, must also be supported.
- Creation of analytic representations (models) based on this analysis, such as predictive scorecards, functions or business rules.
- Validation of these models, to prove they will be predictive with data not used to build them, as well as assessment of their effectiveness in making predictions.
- Deployment of these models into an execution environment or as code that can be independently executed.
- The definition and management of repeatable processes or workflows to handle all these steps so that they can be repeated with new data, as part of assessing multiple possible approaches, or with minor edits as the user evolves their approach.

One of the most important facets of these kinds of workbenches is their support for an industrial-scale process for building predictive analytic models. Predictive analytic model building used to be something of a cottage industry, with each modeler making their own choices of scripting language and following a largely manual process. This approach relies heavily on the skills of the modeler and is hard to scale. With organizations increasingly needing dozens or hundreds of models, a more industrial process is called for. This does not eliminate the skill of the modeler, but it does require more repeatability, automation and scalability in the way predictive analytic models are built and managed. This is where a predictive analytic workbench is essential.
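A minimal sketch of the cleaning, derivation and sampling steps that produce an analytical dataset, using pandas with made-up customer data; a workbench provides richer, reusable versions of each of these operations:

```python
import pandas as pd
import numpy as np

raw = pd.DataFrame({
    "customer_id": [1, 2, 3, 4],
    "income": [52_000, np.nan, 61_000, 1_000_000],   # a missing value and an outlier
    "orders_90d": [3, 1, 0, 7],
})

prepared = raw.copy()
prepared["income"] = prepared["income"].fillna(prepared["income"].median())  # impute
prepared = prepared[prepared["income"] < 500_000]                            # filter outliers
prepared["avg_order_gap"] = 90 / (prepared["orders_90d"] + 1)                # derived variable
sample = prepared.sample(frac=0.5, random_state=0)                           # extract a sample

print(sample)  # a flat analytical dataset: one row per customer, original + derived attributes
```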
A predictive analytics workbench gives data miners, and possibly business analysts, the ability to derive useful probabilities about the future from potentially large amounts of data about the past. These probabilities may group or segment customers or other records, identify the propensity of someone to do something (buy, churn, respond, visit), determine the strength of an association between two records, or identify what is likely to be the best combination among many possible ones. Embedding predictive analytics requires the following capabilities. In a future release of the report a set of specific items to look for in each category will be identified.

Predictive analytic models are typically built from a large amount of data, often pulled from multiple data sources. A predictive analytic workbench must be able to connect to and retrieve information from a variety of structured and unstructured data sources as well as flat files of various kinds.

The data available is often not immediately suitable for the construction of predictive analytic models. A predictive analytic workbench provides a variety of tools to allow the cleanup and integration of data prior to modeling. These tools include renaming and re-categorizing data fields, imputing missing values, filtering outliers, extracting samples and transforming data to make it more suitable for modeling. The end result of this data preparation work is what is often called an analytical dataset: a large set of data attributes (some original, some derived) with any hierarchical structure flattened into a single list of attributes.

Modeling efforts typically begin with exploration of the data available to develop some understanding of the data and of the patterns in it. A rich set of visualization and graphical tools, as well as statistical analysis routines, help find the hidden patterns and relationships that might drive an effective model. These tools are often used in conjunction with the data preparation tools so that problems found in graphing the data, for instance, can be corrected in a data preparation routine. The same visualization and analysis tools will also be used to assess model outcomes once models have been developed.

At the core of a predictive analytics workbench is a model creation environment suitable at least for data miners and other analytic users. The modeling environment might also allow business analysts to create and manage the modeling process, typically through a combination of automation and simplified interfaces. Some predictive analytic workbenches are designed for expert users. Some are primarily aimed at these experts but provide simplified interfaces for a broader audience. Some are designed with a single environment that works for both expert and less expert users. While the style of interface and its expectations can vary, all these workbenches create predictive analytic models and related resources in some form of shared repository.

The modeling environment typically involves laying out a series of steps that will result in the construction of a model or models that can be evaluated for performance (the sketch below shows such a sequence in miniature). Steps will include data preparation and analysis as well as the execution of one or more algorithms from an extensive set. Algorithms supported include clustering, association, linear and logistic regression, decision trees, support vector machines, Bayesian modeling and nearest neighbor techniques, to name a few. It is increasingly common to find ensemble models, where several techniques are applied, or one technique is applied with different parameters, and the results aggregated in some fashion to create a single, overall ensemble model.
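A minimal sketch of such a sequence of steps using scikit-learn: preparation and algorithm are captured as a repeatable pipeline and the resulting model is evaluated for performance on held-out data. The data here is synthetic:

```python
# Prepare data, fit a model, and validate it on data not used to build it.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

X, y = make_classification(n_samples=1000, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

# A repeatable workflow: preparation and algorithm captured as one pipeline.
model = Pipeline([
    ("scale", StandardScaler()),
    ("clf", LogisticRegression()),
]).fit(X_train, y_train)

# Validate with held-out data to estimate how predictive the model will be once deployed.
print("AUC:", roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))
```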
Some predictive analytic workbenches can take advantage of in-database modeling engines that can handle some of the data preparation tasks as well as execute the modeling algorithms themselves on the database server that contains the data being analyzed. This improves performance by eliminating the need to move data from the database to a separate analytic server, and takes advantage of the increasingly powerful servers supporting data infrastructure.

Regardless of which technique or set of techniques is used, model performance assessment and comparison tools are used to see how well a model performs. Different models can be compared, and tools such as lift curves (comparing selection using a model to a random distribution) used to see how effective the model would be in production. These tools typically use new data (data that was not used to build the model) to see how predictive the model would be once deployed.

Once the final model or models have been identified they must be deployed. A predictive analytic workbench may allow multiple approaches to deployment:
- Models can be used to score data in batch mode, applying the results back to the database that contained the data from which the model was built.
- Some predictive analytic workbenches can act as a real-time scoring server, using their own scoring engine and providing a web services or other API to allow it to be called during decision-making.
- Scoring code can also be generated (as C or Java, as SQL or as business rules) so that it can be deployed to a Decision Service for real-time scoring.
- In-database scoring is also available, with the definition of the model being pushed to the analytic infrastructure where the scoring engine is running.
- A number of predictive analytic workbenches also allow models to be generated using the Predictive Model Markup Language (PMML), allowing the model to be executed by any business rules or scoring engine that supports this standard.

Models are built from a snapshot of data. As such they age: as time passes, the data being fed into the deployed model may look less and less like the data from which it was built. A predictive analytic workbench needs tools to monitor deployed models to see how their performance varies over time and to identify variations in performance or in data distributions (a simple sketch of such a check follows below). Many new models are initially deployed to challenge an existing model, and the performance of both the original champion model and the new challenger model needs to be compared to see if the challenger is good enough to replace the champion. Model monitoring tools need to identify opportunities to refresh and retrain models and to make it easy for users to rebuild models to take advantage of new data.

Some predictive analytic workbenches provide components for automated model tuning and updating. These machine learning techniques monitor the performance of a model as it is used in deployment and automatically adjust its underlying equation based on that performance. Some of these environments can start with no model and gradually build a predictive model based on the results of random experiments, while others are designed to be used with pre-defined models. Model tuning can be left to run indefinitely, or it can tune the model within defined boundaries and flag the model for re-building if its performance starts to drift outside those boundaries. Model tuning capabilities are often deployed in a Decision Service if that is where the model is being executed.
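A minimal sketch of boundary-based monitoring: compare the scores a deployed model is producing against its baseline at deployment, and flag the model for re-building when they drift too far apart. The threshold and distributions are made up, and real monitoring tools use richer statistics (population stability indices, lift decay and so on):

```python
import numpy as np

def drift_check(baseline_scores, recent_scores, max_shift=0.1):
    """Flag a model for re-building when recent score behavior drifts outside
    a defined boundary around its baseline; a crude stand-in for the richer
    monitoring a workbench would provide."""
    shift = abs(np.mean(recent_scores) - np.mean(baseline_scores))
    return "rebuild" if shift > max_shift else "ok"

rng = np.random.default_rng(0)
baseline = rng.beta(2, 5, size=10_000)   # score distribution at deployment
recent = rng.beta(3, 3, size=10_000)     # scores observed in production
print(drift_check(baseline, recent))     # 'rebuild': the mean has shifted too far
```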
A predictive analytics workbench should offer an enterprise-class repository for storing and managing predictive analytic models. This repository may be a complete decision management repository that also stores business rules and optimization models. It should provide access control and security, audit trails for changes made to models, and versioning.

There is a growing category of software products that allow business rules to be specified and managed alongside predictive analytic models built in the same product. The degree to which large numbers of business rules can be managed, and the range of predictive analytic models that can be built, varies, and such a combined product may therefore not support the complexity required for a specific Decision Management System. These products typically also allow models built in other predictive analytic workbenches to be integrated.

In-database analytics can mean exactly that: analytic capabilities embedded in a relational or columnar database. The phrase is also used to describe analytic capabilities embedded in data warehouse software, in data appliances and increasingly in Hadoop clusters. In-database analytic capability is delivered as a set of libraries, User Defined Functions, that deliver analytic or data mining functions such that they can:
- Access the data in the database, data warehouse, appliance or Hadoop file system in situ, without needing to extract it to some interim format.
- Directly use the memory, parallel processing capabilities, and load balancing and processor management of the data infrastructure.
- Be accessed both from specialist analytic tools (for model creation or data quality tasks, for instance) and from operational systems.

In-database analytic capabilities are specific to a particular database, data warehouse, data appliance or Hadoop distribution. Many vendors offer support for multiple data infrastructure platforms. Some capabilities are provided by the data infrastructure vendors, some by specialty analytic vendors, and some through partnerships between analytic and data infrastructure vendors.

In-Database Analytic Capabilities

For Decision Management Systems, the core capabilities to look for today in an in-database analytic product are:

In-database data preparation and quality. Data preparation, integration and cleaning often consume 60-70% of the time and effort on an analytic project. In a traditional approach, data is extracted from the data infrastructure in which it is stored, processed through various preparation steps and then presented to the analytic modeling algorithms that need it. With in-database capabilities, however, these steps all execute in-database. This means the original data is not extracted from the database but is processed in situ. The resulting cleaned and transformed data may be stored in the data infrastructure or passed out to a predictive analytic workbench for further processing. The net effect is that the data required for analytic modeling is transformed in-database.
In-database model development. In-database model development allows predictive analytic models to be developed using algorithms embedded in the data infrastructure. These algorithms access tables and views directly to get the data they need, process the data using the data infrastructure's processing capabilities, and create a predictive analytic model. This model may be stored in the data infrastructure for in-database scoring or it may be passed out for use elsewhere. These capabilities may be integrated with an external predictive analytic workbench.

In-database model deployment and scoring. In-database model deployment and scoring infrastructure takes models developed using some combination of in-database modeling infrastructure and a predictive analytic workbench and executes them in an operational datastore, so they are available to operational systems accessing that datastore. This generally involves turning models into UDFs or stored procedures that can be called using SQL and that take database fields as input (see the sketch below). In the future, more extensive support for analytic model management, and for wrapping analytics in business rules for in-database decision-making, will become increasingly important.

The ROI of in-database analytics

As with any product, a return on investment can come from increased revenue or decreased costs. Predictive analytics often add top-line revenue by boosting sales or driving fraud out. These kinds of returns are due to the use of predictive analytics in general rather than the use of in-database analytics specifically. Nevertheless, in-database analytics offer an ROI both by increasing value (through speed to market, improved accuracy and increased accessibility) and by decreasing costs:
- Speed to market. The key to deriving ROI from in-database analytics is a dramatic increase in speed to market. Using in-database analytics can result in a 10-100x overall reduction in the time from when a team starts to when decisions are being made more analytically in a Decision Management System.
- Improved accuracy. Predictive analytic models developed in-database might be more accurate than those developed more traditionally.
- Increased accessibility. It is likely that the resulting analytics will be more accessible and so more likely to be used in more places, increasing their reach.
- Lower cost. Primarily from less hardware and improved utilization.

These are all benefits of in-database analytic technology widely available today. Increasingly, integrated model management allows for easier monitoring and managing of deployed models, adding further value. Longer term, the possible deployment of a complete decision (business rules and predictive analytic models) in the database will increase this value significantly by making analytic decision-making pervasive throughout the data infrastructure.

Additional material on in-database analytics: Download the In-Database Analytics Thought Leadership Paper sponsored by SAS here.
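A minimal sketch of the scoring-as-UDF idea, using SQLite (whose Python driver allows functions to be registered for use in SQL) as a stand-in for the databases, warehouses and appliances that provide this natively; the scoring function, its coefficients and the table are all hypothetical:

```python
# Register a scoring function as a SQL UDF and score rows in situ,
# so the data never leaves the database.
import sqlite3

def churn_score(tenure_months: int, complaints: int) -> float:
    # Hypothetical model reduced to a scoring function (e.g. generated code/SQL).
    return min(1.0, 0.05 * complaints + max(0.0, 0.5 - 0.01 * tenure_months))

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE customers (id INTEGER, tenure_months INTEGER, complaints INTEGER)")
con.executemany("INSERT INTO customers VALUES (?, ?, ?)", [(1, 6, 3), (2, 48, 0)])
con.create_function("churn_score", 2, churn_score)   # deploy the model as a UDF

# Operational systems can now call the model with plain SQL over database fields.
for row in con.execute("SELECT id, churn_score(tenure_months, complaints) FROM customers"):
    print(row)
```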
Standards play a central role in creating an ecosystem that supports current and future needs for broad, real-time use of predictive analytics in an era of Big Data. There is a move to real-time scoring: calculating the value of predictive analytic models when they are needed rather than looking up pre-calculated scores in a database. At the same time, the variety of model execution platforms has expanded, with in-database execution, columnar and in-memory databases as well as MapReduce-based execution becoming increasingly common. Modeling too has changed, with the open source analytic modeling language R becoming extremely popular. The range of data types being used in models has expanded along with the approaches used for storage. This increasingly complex and multi-vendor environment has increased the value of standards, both published standards and open source standards. The explosion of interest in predictive analytics has put a premium on standard approaches, especially R, that will allow the ecosystem to expand to meet demand.

Big Data, driven by increased digitization and the Internet, is commonly described in terms of "the 3 Vs" of Volume, Variety and Velocity. Open source technology has evolved to meet this demand by providing a collection of highly scalable approaches to storing and managing data under the Hadoop label. The role of this open source stack in the predictive analytics market is evolving rapidly as the need to bring this data into the predictive analytics mainstream has grown. New technologies for data storage combined with this growth of Hadoop have put a premium on approaches that allow predictive analytics to be built and executed on a wide variety of platforms, increasing the interest in PMML, the Predictive Model Markup Language. These three standards (R, PMML and Hadoop) are increasingly important in predictive analytics.

R is fundamentally an interpreted language for statistical computing and for the graphical display of results associated with these statistics. Highly extensible, it is available as free and open source software. The core environment provides standard programming capabilities as well as specialized capabilities for data ingestion, data handling, mathematical analysis and visualization. The core contains support for linear and generalized linear models, nonlinear regression, time series, clustering, smoothing and more. The biggest opportunity for R is the number of people using it. It is widely used in academic programs and in not-for-profit and government projects. As more professionals see analytics in their future, R is also appealing as a tool to learn with. R usage has risen steadily in the Rexer Analytics Survey every year since the survey first started asking about it; in the 2013 survey, 70% of respondents reported using it while 24% said it was their primary tool. In addition, the number of R algorithms available is huge, with over 5,300 packages that extend R in some way; it is hard to imagine an algorithm that is not available for R. R is an open source project, however, and many companies will need commercial support and training services to succeed. In addition, parallelism, scalability and performance are issues, particularly for the base algorithms. Commercial vendors are mitigating this by providing their own implementations. Tooling is also an issue, with the basic R environment being script-based. Finally, deployment into production is technically complex if using only the base product. Organizations should make R part of their predictive analytics adoption and roll-out strategy. Even for organizations already committed to a commercial platform it makes sense to take advantage of R at some level, and organizations should explore their platform's support for integrating R. Plan on working with a commercial vendor that has a solid plan for R, in terms of providing scalable implementations of the algorithms and either a better development environment or integration with graphical modeling tools.

Hadoop consists of two core elements: the Hadoop Distributed File System (HDFS) and the MapReduce programming framework. An open source project, Hadoop development started in 2004, inspired by earlier work at Google, and became an official top-level Apache project in 2008. HDFS is a highly fault-tolerant distributed file system that runs on low-cost commodity hardware, allowing very large amounts of data to be stored cheaply. MapReduce is a programming framework that breaks large data processing problems into pieces so they can be executed in parallel on many machines close to the data they need to process (sketched below).
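A minimal, single-process sketch of the MapReduce pattern, using word counting (the canonical example); Hadoop runs the same map, shuffle and reduce phases in parallel across many machines:

```python
from collections import defaultdict

def map_phase(record):
    # A map function emits key/value pairs from each input record.
    for word in record.split():
        yield word.lower(), 1

def reduce_phase(key, values):
    # A reduce function aggregates all the values emitted for one key.
    return key, sum(values)

records = ["Big Data", "big data big decisions"]
groups = defaultdict(list)
for record in records:                  # in Hadoop, split across many mappers
    for key, value in map_phase(record):
        groups[key].append(value)       # the shuffle: group pairs by key
print([reduce_phase(k, v) for k, v in groups.items()])
# [('big', 3), ('data', 2), ('decisions', 1)]
```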
Hadoop provides a distributed, robust, fault-tolerant data storage and manipulation environment that is well suited to the challenges of Big Data. The use of commodity hardware allows it to scale at low cost, while the ability to apply the schema of data only when it is being read means Hadoop is very flexible for a wide variety of data types. Storage and processing are streaming-centric, and this enables the environment to handle fast-moving data. Hadoop is, however, a programmer-centric environment and there is no support for SQL in the base environment. Hadoop structures are better at batch processing than at interactive systems, and Hadoop lacks any specific data mining or predictive analytics support. Hadoop has a lot of potential for companies adopting predictive analytics, but it must be applied in context. Beginning with a business problem (a decision that must be made) determines the analytics that will be required and thus what kind of data will be required. This creates a use case for Hadoop by identifying a business problem that requires data not already available in existing infrastructure. Organizations that lack familiarity with open source should consider one of the commercial organizations that support Hadoop. Once Hadoop becomes part of the data infrastructure for an organization, it is important that it is supported by the rest of their decision management infrastructure.

PMML is an XML standard for the interchange of predictive analytic models, developed by the Data Mining Group. The basic structure is an XML document that contains a data dictionary, data transformations and models. PMML started in 1998 with version 0.7, moving to a 1.0 release in 1999. Since then the standard has seen multiple releases, with 4.1 being the most recent (in 2011). The 4.x releases marked a major milestone with support for pre- and post-processing, time series, explanations and ensembles. PMML offers an open, standards-based approach to operationalizing predictive analytics. Support for PMML is increasingly broad-based, spanning analytic tools, databases, data warehouses and server deployments. Business rules and other development environments also increasingly support it. The primary challenge for PMML, as for any standard, is to get the vendor community to regard support for it as more than just a "check the box" capability. Standards such as PMML also struggle to get vendors to stay current and support the latest release; for PMML this is particularly an issue for the support in PMML 4.x of pre- and post-processing. Finally, not everything that can be done in predictive analytic tools can be generated into PMML. All organizations approaching predictive analytics should include PMML in their list of requirements for products. Selecting analytic tools that do a good job of generating and consuming PMML, and identifying operational platforms that can consume and execute PMML, just makes sense. While organizations committed to a single vendor stack may be able to avoid this requirement, even there the ability to bring models developed by a consortium or third party into that environment may well prove critical, while partners may need to execute models without sharing the same vendor stack.
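As a sketch of generating PMML from a modeling environment, the following assumes the open source sklearn2pmml package (which requires a Java runtime) alongside scikit-learn; the resulting XML document, with its data dictionary and model, can then be executed by any engine that consumes PMML:

```python
# Export a trained model as PMML so any PMML-aware rules or scoring
# engine can execute it, independent of the tool that built it.
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier
from sklearn2pmml import sklearn2pmml
from sklearn2pmml.pipeline import PMMLPipeline

X, y = load_iris(return_X_y=True)
pipeline = PMMLPipeline([("classifier", DecisionTreeClassifier(max_depth=3))])
pipeline.fit(X, y)

# Writes an XML document containing the data dictionary and the model.
sklearn2pmml(pipeline, "iris_tree.pmml")
```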
Future Standards

There are also some future developments worth considering: the emergence of the Decision Model and Notation standard, growing acceptance of Hadoop 2 and planned updates to PMML. The Object Management Group recently accepted a new standard, the Decision Model and Notation standard. DMN, as it is known, is now finalized. DMN provides a common modeling notation, understandable by both business and technical users, that allows decision-making approaches to be precisely defined. Hadoop 2.x (technically Apache Hadoop 2.2.0) was released in October of 2013. It is considered a future development because most Hadoop users are not using it yet. Hadoop 2.x is really all about YARN, a resource management system that manages load across Hadoop nodes and allows approaches other than MapReduce to be used. PMML Release 4.2 is expected in the first half of 2014. As with 4.1, release 4.2 is expected to improve support for post-processing, model types and model elements. 4.2 is particularly focused on improving support for predictive scorecards (especially those with complex partial scores), adding regular expressions as built-in functions, and continuing to expand support for different types, such as continuous input fields in Naïve Bayes models.

Additional material on standards in predictive analytics: Download the Standards in Predictive Analytics Thought Leadership Paper sponsored by The Data Mining Group, Revolution Analytics (now Microsoft) and Zementis here.

Optimization and Simulation

An optimization suite is an environment for defining and solving mathematical models and for simulating the differences between multiple similar mathematical models. An optimization suite allows a modeler or business analyst to define a business objective and a set of constraints and then solve this problem to see how best to run the business. Optimization suites support what is sometimes called Operations Research or Management Science. There are really three uses of optimization in the context of Decision Management Systems:
- When a decision has a potentially complex answer that involves multiple elements, it may be effective to optimize the selection of these elements.
- When a decision answer is a single element, it may be useful to optimize across many decisions to allocate the available answers to each specific decision most effectively.
- When reviewing possible decision-making strategies as part of decision analysis, it may be possible to use optimization to tune or select between these strategies.

Optimization allows organizations either to find a feasible solution to a heavily constrained problem, or to maximize the value gained from a constrained set of resources by finding the most profitable, quickest or cheapest combination of resources that is allowed. Optimization differs from both business rules and predictive analytics in a number of ways. Business rules are absolute where optimization need not be: for instance, business rules allow an offer to be made to someone only if certain conditions are true, where an optimization model might allocate offers based on where they will be most effective.
Optimization can be effective when business rules are numerous and potentially contradictory, as it allows for trade-offs between values where business rules require defined sets of conditions. An analytic model is created through analysis of historical data, while an optimization model is built explicitly from business know-how; historical data may be used to see how the model would have worked in the past (though this is not necessary). Because predictive analytic models are built and executed separately, they are often very quick to execute. Optimization models, in contrast, must be solved each time they are used, and this can require significant time and resources.

An optimization suite needs to support a range of activities:
- Defining a constrained optimization problem as a mathematical model using variables, an objective function and constraints, both hard and soft (a minimal worked example follows below).
- Solving this problem, often multiple times as elements of the problem are changed and re-assessed.
- Integration with a wide range of data sources so that data can be brought in and run through a defined optimization model. These data sources might be systems that are internal to the organization or external data.
- Simulation and comparison of different scenarios by a non-technical user to see what the best choice is likely to be going forward.

An optimization suite gives modelers, and possibly business analysts, the ability to manage trade-offs and constraints to find the optimal action to take. An optimization suite requires the following elements.

At the core of defining an optimization model is a modeling language or languages. Some optimization suites have their own such language, but a number of popular ones exist and some solvers (see below) can support several languages. Most optimization suites will provide an optimization model development environment suitable for modelers to specify models in one or more of these languages. This environment may be based on a commercially available IDE such as Eclipse or Visual Studio. Debugging and profiling tools allow modelers to review and change the model to correct identified problems: find conflicts, relax constraints or profile performance. Models can be complex and even unsolvable, so profiling and debugging tools are essential to allow a viable model to be defined.

Most optimization suites include multiple engines or solvers that apply mathematical techniques to the developed models to solve the problems defined in those models. These solvers can be specific to different kinds of problems, such as linear programming problems, mixed integer problems, quadratic problems and combinations such as mixed integer quadratic problems. These solvers may be used to run scenarios, to find optimal actions that can be loaded into a production system as a batch, or can execute in a Decision Service to solve an optimization problem as part of a single decision. In addition, many standalone solvers are available.

Optimization models are coded or constructed by hand, but scenarios typically involve a large amount of data, often pulled from multiple data sources. An optimization suite must be able to connect to and retrieve information from a variety of structured and unstructured data sources as well as flat files of various kinds, and present this data for scenario analysis. Many optimization problems require an interface that allows a business analyst or business user to run and compare scenarios based on these models and associated data. Such scenario analysis involves rich visualization and the ability to bring real-world historical data into the system to run through the model. Optimization suites include either scenario analysis interfaces or the ability to rapidly generate such interfaces for a given model.
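A minimal worked example of defining and solving a constrained optimization problem, using SciPy's linear programming solver with made-up numbers: choose production quantities of two products to maximize profit, subject to limited machine and labor hours:

```python
from scipy.optimize import linprog

# Maximize 40*x1 + 30*x2 -> linprog minimizes, so negate the objective.
c = [-40, -30]
A_ub = [[2, 1],    # machine hours per unit: 2*x1 + 1*x2 <= 100 available
        [1, 2]]    # labor hours per unit:   1*x1 + 2*x2 <= 80 available
b_ub = [100, 80]

result = linprog(c, A_ub=A_ub, b_ub=b_ub,
                 bounds=[(0, None), (0, None)], method="highs")
print("produce:", result.x, "profit:", -result.fun)
# Optimal mix is x1=40, x2=20 for a profit of 2200 under these constraints.
```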
Such scenario analysis involves rich visualization and the ability to bring real-world historical data into the system to run through the model. Optimization suites include either scenario analysis interfaces or the ability to rapidly generate such interfaces for a given model. The results of optimization can be deployed in a number of different ways. Deployment tools in an optimization suite may support the deployment of a model as results or recommendations, the packaging of a model to run against a solver running in another environment at run time, or the conversion of optimal actions into rules that mimic the assignment of an optimal action. An optimization suite should offer an enterprise-class repository for storing and managing optimization models and associated scenarios. This repository may be a complete decision management repository that also stores business rules and predictive analytic models. It should provide access control and security, audit trails for changes made to models, and versioning.

Monitoring Decisions

The final area of capability is that of monitoring and improving decisions over time. These capabilities are essential for Decision Management Systems both because decisions are high-change components and because the time it takes a decision to come to fruition can be extensive, making it hard to tell good ones from bad ones. There are many drivers of change in decision making. Regulations change, so organizations must change how they make eligibility decisions to remain compliant with those regulations. Policies change, so organizations must, for instance, change their validation of suppliers to track new data requirements. Competitors change, so organizations that wish to remain competitive must change their discounts or pricing. Markets, such as the financial or credit markets, change, so organizations must constantly change the way they assess risk. Consumer behavior changes regularly and continually, so organizations working with consumers must constantly address these changes in their decision-making. Finally, of course, fraudsters adapt and seek new loopholes to exploit, so organizations must change how they detect and process fraud to focus on new fraud as it develops. In addition to outside changes that explicitly drive changes to decision-making, organizations want to continuously improve their decision-making. The challenge for some decisions is the time it takes for decisions to play out: it may be weeks or months before an organization knows if the decision was a profitable one, for instance. To continuously improve in these circumstances it is essential to be able to conduct experiments and compare their results. Such an experiment makes the same decision in two or more different ways, applying the different approaches to different transactions and comparing the results. Sometimes called adaptive control, champion-challenger or A/B testing, these approaches drive continuous improvement in decision making. As shown in Figure 4: Continuous improvement in decision making below, this approach requires that the results of a decision be evaluated, predictive analytic models and business rules updated and refined, and new challengers or alternatives developed. These are fed back into the decision-making loop and used to make future decisions. The results of these decisions are evaluated in turn, with successful experiments being adopted, unsuccessful ones dropped and new ones developed in a continuing cycle.
Figure 4: Continuous improvement in decision making

The capabilities to support monitoring and improving of decisions are not typically found in a single software product. Instead these capabilities drive the requirements for products used for both decision logic management and embedding predictive analytics. The primary capability required for decision monitoring is that of logging decision execution. When a decision is made by the Decision Management System it must be possible to log how that decision was made, including which business rules fired. This log should include any predictive analytic model scores calculated during the decision as well as the specific action recommended by the Decision Management System. In addition, these decision-making logs should be stored in a way that allows them to be integrated with information about the response of customers and others to the decision: did the customer accept the offer, did the salesperson override the price with an additional discount, was the deal closed, and so on. The longer-term results, such as orders placed or customers retained, that can be attributed to these responses are logged by other systems. It should also be possible to tie the decision-making specifics to these results. While logging is essential for ongoing improvement of decision making, logging also supports compliance and audit needs by providing complete execution transparency. When an audit or compliance review is conducted it will be possible to tell exactly how a decision was made and whether or not that decision followed the correct guidelines. To ensure continuous improvement of decisions it will often be necessary to conduct experiments. These experiments typically involve multiple approaches to the decision logic of the decision, the predictive analytic models used in the decision, or both. Additional decision logic must be managed to determine which of the approaches should be applied to a specific customer or transaction, and it must be possible to record this as part of the decision itself. All products suitable for managing decision logic can manage experiments in this way. Some products for decision logic management have additional capabilities built in to make it easy to manage, review and compare the various approaches being used within a decision. The performance of a decision can and should be managed and monitored in the same way any other aspect of business performance is managed and monitored. Generally it is straightforward to apply the standard performance management capabilities of an organization to decision logs to see trends, hotspots, etc. While changes to decisions and decision logic are sometimes extensive, requiring all the capabilities described above, sometimes more localized and focused changes are required. These should generally be made by business users so that a full IT cycle can be avoided for what could be regular, minor updates. To make this work it must be possible to use the decision logic management capabilities for non-technical users to present a business person with their own business rules, in context. Ideally this environment will only allow them to make changes that make sense and will present no unnecessary information. Most products for managing decision logic either include suitable interfaces or allow suitable interfaces to be developed. As noted above, it is important to provide impact analysis tools to allow non-technical people to rapidly see the business impact of any changes they make.
This should cover both design impact and execution impact and involves more business-centric functionality than is required for testing. Impact analysis tools may need to consider changes to decision logic, to predictive analytic models, or to both. See above under Overall Architecture. When multiple decision-making approaches are being used in parallel it will be essential that the effectiveness of these alternatives can be assessed. Capabilities such as swap-set analysis (showing which customers, for instance, would get offer B rather than offer A) as well as more general comparison of business performance metrics are critical. In addition, simulation and what-if analysis tools that can use each alternative approach and compare the outcomes of multiple simulations based on the approaches will be required.

Key Characteristics

Experience in working with organizations that are developing Decision Management Systems shows that while there are many ways to develop them effectively, certain key characteristics come up repeatedly as critically important. These characteristics fall into a number of areas including the completeness of the platform, engagement of business users, architectural flexibility, organizational scale and decision monitoring. This set of characteristics is neither a definition of a complete set of features and functions required to build a Decision Management System nor a complete list of characteristics for any of the product categories. It is intended as a set of characteristics you can look for in products you are purchasing or using that will support a focus on Decision Management Systems. A small number of vendors offer a complete platform for building Decision Management Systems. These platforms handle decision logic or business rules, support data mining and predictive analytic modeling, include constraint-based optimization and provide monitoring and integration capabilities for deployed systems. While it is not necessary to buy a complete platform from a single vendor, it is valuable for products to see themselves as part of a broader ecosystem. For instance, Business Rules Management Systems that are aware of predictive analytics and offer integration with such systems, and predictive analytic workbenches that offer business rules-friendly deployment options, are more suitable for Decision Management Systems than more narrowly focused products.

Complete Platform

A complete platform is an integrated set of offerings that allow for the management of decision logic, the building and deployment of predictive analytic models and the mathematical optimization of decisions. These offerings are either a single product or a product set with a common user interface, shared repository and common tooling that operates across the products. Support for decision monitoring and analysis is either provided or the data is made available to standard reporting and dashboard components.

Complete Ecosystem

A company may not offer a complete platform for Decision Management Systems themselves while still supporting a complete platform through their ecosystem. By supporting open standards such as PMML and by partnering with other vendors that offer more pieces of the puzzle, vendors can offer a Complete Platform Ecosystem. Some companies are not focused on Decision Management Systems but on providing a specific component. They may be focused only on managing decision logic, building predictive analytic models or constraint-based optimization.
They may not even think of themselves as participants in the development of Decision Management Systems. These companies are not likely to have a complete platform, nor are they likely to actively partner to develop a complete platform ecosystem. Their products can still be easy to integrate and use alongside other products and can support standards such as PMML for predictive analytic models or JSR-331 for constraint-based optimization. For standalone products focused on a specific technology market, this kind of openness is critical to being part of a complete platform for Decision Management Systems. The agility and adaptability of Decision Management Systems crucially relies on the engagement of business users. The extent to which products being used to build these systems can bring business users into the development team is therefore critical. Products that focus on allowing business users to read and write decision logic, participate actively in building or reviewing analytic models, and allow non-technical users to run through scenarios are more likely to be successful than those focused only on technical developers.

Business User Analytic Modeling

Analytic tools can engage business users by providing an environment designed for non-technical users to create and use analytic models. This might be a complete environment that uses automation and machine learning algorithms to build predictive analytic models with minimal data mining expertise required. Such use of machine learning algorithms and automation should be complemented by user interfaces, reporting and automated checks designed to support a less knowledgeable user. These additional capabilities can ensure that problems such as overfitting are avoided and that test and validation data is automatically set aside, for instance. Alternatively a product might be a business user friendly environment layered on top of a more typical data mining/predictive analytic workbench. This would use wizards and other simplifying features to make it possible for non-technical users to do data mining and create predictive analytic models. Generally these environments integrate with the more traditional modeling environment and store models in the same repository such that modeling specialists can refine or enhance the models built using these less technical interfaces.

Business User Rule Management

The management of decision logic by non-technical users, non-programmers, is a key element in delivering the agility required of a Decision Management System. While any product focused on managing business rules or decision logic might be said to allow some business user rule management, true business user rule management requires a number of elements. First, the rules themselves must be approachable. Using declarative statements in place of procedural code, so that each rule can be considered and edited independently, as well as using a business user vocabulary rather than technical data element names in rules, will ensure readability and clarity. Supporting a verbose, readable syntax in near natural language rather than terse programmer-centric constructs helps, as does emphasizing graphical editing of rules in decision tables, rule sheets, decision trees, decision graphs and decision flows so that as little as possible has to be written out longhand. Because business users are not programmers, testing tools need to be accessible to them, and excellent completeness and correctness checks are essential.
Ideally these tests are performed inline, as business rules are edited, to ensure that obvious mistakes and omissions are avoided. Business users do not like technical environments designed for programmers, so support for editing business rules outside these environments, through a web interface or a point-and-click, business-friendly editing environment, makes rules more accessible. Such editing environments might also include support for editing using standard Microsoft Office products. Ideally these editors will be embeddable or mashable so that rule management can be embedded in environments focused on a specific business task rather than on rule editing.

Support for a learning curve

Most organizations will not be able to jump straight to either business user analytic modeling or business user rule management. They will need to bring business analysts on board first, exposing some of the tasks previously performed by IT or analytic teams to these semi-technical users. Over time the role of business analysts can be expanded and true business users brought in to work on certain elements. Products that provide way stations and gradual increases in complexity through multiple editing environments will be easier to adopt than those with a more limited set of options. For example, a product that allows users to bring analytics to bear incrementally rather than all at once will improve analytic adoption. If simple interfaces allow access to candidate association rules and proposed splits in decision trees, then users will become gradually accustomed to the power of analytics to improve their decision logic. Over time they can be exposed to completely data-driven decision trees, unsupervised clustering and ultimately more complex predictive analytic models. Similarly, a variety of rule editors that allow a user only to change numeric values in a locked-down rule can get users used to the idea that they can change decision logic. Over time, editors that allow new rules to be built based on templates, and perhaps new rules to be built using a point-and-click editor, can bring users up the learning curve gradually.

Impact Analysis

One of the biggest barriers to business users taking ownership of their decision logic, in fact probably the biggest single one, is an inability to see what the impact of a change will be. Products that provide strong impact analysis tools, especially tools that allow a non-technical user to see how the change they are considering will impact their business results, will be better able to drive successful business user engagement. Impact analysis can be done using functions designed for regression or performance testing. The ability to do impact analysis and simulation using real data in a business-friendly way is invaluable, however. This involves simple-to-use interfaces for loading data, an ability to assign business value to different outcomes and, generally, the ability to get results into Excel for further analysis. Ideally the environment should continuously perform impact analysis as changes are being made so that all edits are made in the context of their business impact.

Impact on application context

Simulating the impact of a change on the application or process context in which the decision is being made is sometimes critical. For instance, the impact of a change to fraud detection decisions may be best considered in terms of the workload created for fraud investigators and the average handle time for fraud cases.
These measures are not measures of the decision performance alone but include process and application design issues. An ability to determine the broader business impact of specific changes made to decision-making is a potentially very powerful capability. It may seem that this requires the application or process context and the decision-making technologies to share a vendor. Certainly this capability is easier to provide in those circumstances, but most organizations will ultimately find themselves in a more heterogeneous environment as noted above, so more flexible capabilities that allow decision simulation to be integrated with simulation capabilities from other tools should be valued as well as more packaged capabilities. Decision Management Systems automate decisions that must often be used in multiple channels, where these channels may be supported by different applications and architectural approaches. Decisions may need to be made as part of business processes, in response to events detected or in support of legacy environments. A degree of architectural flexibility is therefore very useful in products used to develop Decision Management Systems. Support for multiple platforms and deployment styles as well as a wide range of integration options helps a lot.

Cloud ready

The recent growth of the cloud as a platform for enterprise applications means that organizations are increasingly relying on cloud-based solutions for CRM, HR and other applications. Because Decision Management Systems must integrate with these systems it is becoming increasingly important for products used to build Decision Management Systems to be cloud-ready. This means being able to connect to cloud-based systems to access data and being deployable to the cloud so that decision services can be easily integrated into cloud-based systems. The cloud also has a lot to offer for the development of Decision Management Systems. Many tasks, such as building predictive analytic models and running simulations or impact analysis, require a great deal of computing power. Being able to push analytic modeling tasks, simulation runs and impact analysis execution onto cloud-based resources means these can be run in the background while the user works on something else on their personal computer, and can greatly increase the scope of what is possible in these tasks. For products being evaluated for Decision Management System construction, the ability to integrate cloud resources for these high compute power tasks offers great productivity increases.

Heterogeneous environment

Most organizations of any size have a heterogeneous environment with multiple operating systems, multiple databases, different communication protocols, etc. Different channels have different systems, mobile devices and in-store or kiosk/ATM machinery are unique, and organizations often have layers of computing equipment of different ages. No organization ever has a single, coherent architecture across all its systems, at least not for long. Because decision-making components must often support multiple channels and be consistent across multiple systems, products for Decision Management Systems should have multiple deployment options and be easy to deploy and integrate with these different operational environments. Organizations are often heterogeneous in another way.
Some organizations use multiple business rules management systems; many use multiple predictive analytic workbenches as analytic modelers choose their own or use a tool to get access to a specific algorithm. Tools that recognize they must operate in this environment will generally be preferred, therefore, especially in the ongoing evolution and management of Decision Management Systems, e.g. in analytic model management.

Embeddable management and control components

Decision Management Systems do not stand alone. In particular, the management and control of Decision Management Systems should be easy to integrate into other management and control interfaces. For instance, it should be easy to integrate analytic model management reporting with more general business performance reporting, and business rule management components should be embeddable in other interfaces. A key criterion for products used to build Decision Management Systems, then, is how easy it is to embed management components into portals and dashboards built using other tools, feed analytic model management data or rule performance data into a regular performance management environment, and so on. There is tremendous interest in "Big Data" at the moment as the rapid growth of social media, weblog data, sensor data and other less traditional data sources creates new challenges for managing this information. While much of this interest has been around supporting queries and reporting, organizations are beginning to use these new data sources in their Decision Management Systems. Products that can support both traditional and newer Big Data sources therefore offer increased scope for organizations going forward. Support for Big Data involves being able to bring potentially very large amounts of data stored in NoSQL systems such as Hadoop into analytical modeling as well as into the operational environment. As many of these sources are less structured, it also involves supporting text analytics and operations that use text operators. Flexibility in data definition, so that the variety and velocity of these data sources do not disrupt operations, will also make a big difference. Big Data is often described in terms of an increase in volume, an increase in velocity and an increase in variety: more data, of more types, arriving more quickly. This increase in Big Data volume, variety and velocity has clear implications for Decision Management Systems. See the section on Big Data for more details. Most products used for developing systems are unconcerned with the operation of those systems once they are deployed. Products used to develop Decision Management Systems, in contrast, offer much more value if they are able to support the ongoing monitoring and improvement of the decision-making embedded in those systems. Products that provide analysis and other tools that integrate with deployed systems are particularly useful in this regard.

Decision Performance

Measuring overall decision performance by tying decision outcomes and decision-making approaches to business results is an important aspect of Decision Management Systems. In practical terms this means being able to easily log the decisions made, including those made as part of A/B or champion/challenger tests, so that they can be integrated with overall business performance data in a reporting environment.
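To make this concrete, the sketch below shows one plausible shape for such a decision log record. The field names and values are invented for illustration and are not taken from any particular product; real products define their own log formats.

```python
# A hedged sketch of a decision execution log record. Field names and values
# are invented for illustration; real products define their own formats.
import json
from datetime import datetime, timezone

decision_log_entry = {
    "decision": "RetentionOffer",
    "transaction_id": "TX-0001",                 # hypothetical identifier
    "timestamp": datetime.now(timezone.utc).isoformat(),
    "experiment_arm": "challenger-B",            # which A/B or champion/challenger path was used
    "rules_fired": ["GoldCustomer", "RecentComplaint"],
    "model_scores": {"churn_propensity": 0.62},  # analytic scores used in the decision
    "action": "offer_discount_10pct",            # the action the Decision Service recommended
}

# Stored this way, decision logs can later be joined to response and outcome
# data (offer accepted, customer retained) in a reporting environment.
print(json.dumps(decision_log_entry, indent=2))
```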
Products that allow this kind of recording to be done automatically, or with flags and settings rather than code, are preferred as they create a lower maintenance overhead and are more likely to stay up to date as time passes and the decision making in the system evolves.

Model Performance

Predictive analytic models are generally built at a point in time and so their performance, in terms of how predictive they are, degrades over time. Predictive analytic workbenches that provide automated facilities for monitoring model performance, and for identifying models whose performance is degrading, are to be preferred over those that require a development team to hand-code this kind of model performance monitoring. In addition, it is often helpful if model performance monitoring tools can support models built in multiple environments, as this is a common situation.

Rule Execution

One of the most important ways in which decisions can be monitored is through logging the rule execution involved in the decision. While this kind of logging can be hand-coded into almost any system, a tool that allows it to be turned on and off for different parts of the decision, that handles it automatically as a background task and that supports it without a significant performance impact is highly desirable. Logs that can be easily stored in database tables and used for reporting, and logs that can be easily converted into or viewed in a more verbose format (using actual rule names, for instance, rather than ids), are also more useful. As with any technology, performance and scalability should be considered as part of a product selection. Most of the products listed in the appendix are scalable and perform well enough for most if not all scenarios. Organizations with specific and very challenging performance and scalability requirements should be sure to consider these explicitly. For most organizations it is enough to look for solid scalability and for performance adequate to support real-time decisioning.

Scales up and out

In general it is more important to consider whether a tool scales well than to assess its particular performance on a given piece of hardware. If a product scales up and out well, then more, or more powerful, hardware can be bought and used effectively as demands increase. If a product does not scale then, even if its initial performance is superior, an organization runs the risk that future demands cannot be met. Products that support multi-core processors, in-memory processing and distributed processing will scale better than those that do not. This is especially important in high compute power functions such as analytic modeling, optimization, simulation and impact analysis. There is a general move from batch to real-time decision making in organizations of all sizes and types. Many initial Decision Management Systems, however, are batch oriented or have demands that are not truly real-time, with several seconds allowed for responses. Over time most organizations should expect to see more demand for real-time decision making as well as increasing needs to support streaming/event-based systems. Products that have the kind of low-latency, time-based capabilities these solutions need are therefore to be preferred over those that do not. Organizations adopting Decision Management Systems generally start with only a single project or two. Over time they become aware of the ROI of Decision Management Systems and the potential for them to change how their organization, its systems and business processes operate.
At this point they begin to scale up their plans for Decision Management Systems. Most organizations do not wish to replace the tools with which they are familiar during this expansion. As a result, products with characteristics that support organizational scale will be usable longer. In particular, products that support industrialized analytics and enterprise rule management will scale to organization-wide use.

Industrialized analytics

When organizations first adopt predictive analytic models they generally only build one or two models. These models are often hand-crafted by an analytic practitioner and then deployed by hand into an operational environment through batch updates or manual re-coding of the model. As the use of predictive analytics expands, however, hundreds or even thousands of analytic models may be required by the organization. These models must also be monitored and regularly updated if they are to maintain their level of predictability. Given that most organizations cannot simply recruit many more analytic professionals, a more scalable process is required. An industrialized analytic process emphasizes the use of automation in model construction, both to prepare and analyze data sources and to perform some or all of the modeling itself. It focuses on rapid deployment of models to real-time operational environments and monitors these models automatically to identify when they need to be re-built. Analytic professionals are engaged to handle difficult problems, to check on models that show problems or otherwise to supervise and manage a largely automated production line for analytic models. Supporting this environment requires analytic tools that emphasize scale and automation, not just model precision.

Enterprise rule management

For decision logic the problem is slightly different. Reviewing business rules, comparing them to new regulations or policies and making appropriate changes are still manual activities, even when scaling business rules to the whole organization. The challenges come in being able to find the business rules that matter, ensuring the business rules that should be reused are reused, and in handling governance and security policies. When there are many rules that are owned by different groups, and when reuse means that no one organization handles all the business rules in a decision, enterprise-scale management capabilities become essential. A product that allows federated storage of business rules in multiple repositories, that provides robust integration options with other repositories such as those for services and business processes, and that supports a variety of repository structures will be better able to scale. Similarly, support for approval workflows, integrated security and good user management capabilities will be important.

Best Practices in Decision Management Systems

There are four key principles of Decision Management Systems: begin with the decision in mind; be transparent and agile; be predictive, not reactive; and test, learn and continually improve. Within each of these principles it is possible to identify a number of specific best practices in analysis and design, in development, in deployment and in operation.

Begin with the decision in mind

Decision Management Systems are built around a central and ongoing focus on automating decisions, particularly operational and micro decisions. Developing Decision Management Systems with a focus only on business processes, only on events or only on data is not effective.
Understanding the business process or event context for a decision is helpful, but the development of Decision Management Systems requires a focus on decisions as a central component of enterprise architecture. Focusing on operational or transactional decisions, those that affect a single customer or single transaction, is a significant shift for most organizations and requires a conscious effort. In particular, where the operational decision in question is what is known as a micro decision, one that focuses on how to treat a single customer uniquely rather than as part of a large group, organizations must learn to focus on decision-making at a more granular level than previously. It is also worth noting that this focus on decisions must come first, before a focus on business rules or predictive analytic models. When it comes to developing Decision Management Systems, the right business rules and most effective predictive analytic models can only be developed if there is a clear decision focus. While the most basic best practice is encapsulated in this principle, begin with the decision in mind, there are some more specific best practices that should be followed.

Decisions as peers for Process

One of the most important aspects of building Decision Management Systems is to ensure that decisions start being treated as peers to business processes. Many organizations that are succeeding with SOA and successfully adopting new and more advanced development technologies and approaches have done so using a business process focus. A focus on the end-to-end business process, not on organizational or system silos, and the tying of these business processes to real business outcomes represents a significant improvement in how information technology is applied to running an organization. To move forward with Decision Management Systems, however, it is necessary to do more than regard decisions as just part of a business process. Our work with clients, as well as the evaluation of results from multiple companies, shows that organizations that manage decisions as peers to business processes do better. While it is true that decisions must be made to complete most business processes, simply encapsulating the decisions within the business process is not enough. Decisions are true peers for processes. Decisions are often re-used between processes, and how a decision is made has a material difference on how the process executes. Failing to identify decisions explicitly can result in decision-making logic being left in business processes, making them more complex and harder to change. Identifying high level decisions at the same time as you identify high level processes allows your understanding of both to evolve in parallel, keeping each focused and simpler.

Link Decisions to business outcomes and results

Your business can be thought of as a sequence of decisions over time. Organizations make strategic decisions, tactical decisions and operational decisions, but each decision, each choice, affects the trajectory of the business. In fact, given that each choice you make about products, suppliers, customers, facilities, employees and more is a decision, it is clear that decisions are the primary way in which you have an impact on the success or failure of your business. If there is no decision to make then there is no way for the organization to affect its destiny.
One of the first steps, then, in understanding your decisions so that they can drive the development of effective Decision Management Systems, is linking them to business outcomes and results. For each decision you identify, it is important to understand what key performance indicators, objectives, or business performance targets are impacted by the decision. Understanding that a particular decision has an impact on a particular measure, and understanding the set of decisions that impact a measure, has two important consequences. First, it enables you to tell the difference between good decisions and bad decisions. A good decision will tend to move the indicators to which it is linked in a positive direction; a bad one will not. Second, it enables you to see how you can correct when a measure gets outside acceptable bounds or moves in a poor direction. Understanding which decisions could be made differently gives you an immediate context for solving performance problems. Building links between the decisions you identify and your performance management framework is important as you identify and design your decisions. It is also important to use this information to present options and alternatives to those who are tracking the objectives in a performance management context.

Understand decision structure before beginning

Identifying decisions early, considering them as peers to processes and mapping them to your business performance management environment are all great ways to begin with the decision in mind. Before you start developing a Decision Management System, however, you should understand the structure of your decisions. The most effective way we have found to do this while working with clients is to decompose decisions to show their dependencies. Decisions are generally dependent on information, on know-how or analytic insight, and on other (typically more fine-grained) decisions. Having identified the immediate dependencies of a decision, you can then evaluate each of the decisions you identified and determine their dependencies in an iterative fashion. The dependency hierarchy you develop will actually become a network as decisions are reused, when multiple decisions have a dependency on a common sub-decision. This network reveals opportunities for reuse, shows what information is used where, and identifies all the potential sources of know-how for your decision making, whether regulations, policies, analytic insight or best practices. For more on this approach please see the author's book Decision Management Systems and Alan Fish's Knowledge Automation in the list of works cited.

Use a standards-based decision modeling technique

Building a Decision Requirements Model using the new Decision Model and Notation (DMN) standard captures decision requirements and improves business analysis and the overall requirements gathering and validating process. The Object Management Group (OMG) finalized DMN in Spring 2015. Decision modeling is a powerful technique for business analysis and for enterprise architecture. While important for all software development projects, decision requirements are especially important for Decision Management Systems projects adopting business rules and advanced analytic technologies, providing a repeatable, scalable approach to scoping and managing the decision-making where rules and analytics are most effectively applied.
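To illustrate the kind of dependency network such a decomposition captures, here is a minimal sketch representing decisions, their information inputs and their knowledge sources as plain data structures. The decision names and dependencies are invented for illustration and the representation is our own, not part of the DMN standard itself.

```python
# A minimal, illustrative sketch of a decision dependency network of the kind
# a Decision Requirements Model captures. All names are invented examples.
from dataclasses import dataclass, field

@dataclass
class Decision:
    name: str
    information: list = field(default_factory=list)   # input data the decision needs
    knowledge: list = field(default_factory=list)     # regulations, policies, analytics
    sub_decisions: list = field(default_factory=list) # more fine-grained decisions

# A shared sub-decision, reused by two higher-level decisions,
# turning the dependency hierarchy into a network.
risk_score = Decision(
    name="Assess Applicant Risk",
    information=["application data", "credit bureau data"],
    knowledge=["risk scorecard model"],
)

eligibility = Decision(
    name="Determine Eligibility",
    information=["application data"],
    knowledge=["eligibility regulations"],
    sub_decisions=[risk_score],
)

pricing = Decision(
    name="Set Loan Terms",
    information=["product catalog"],
    knowledge=["pricing policy"],
    sub_decisions=[risk_score, eligibility],
)
```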
Why model decisions

Experience shows that there are three main reasons for defining decision requirements as part of an overall requirements process: current requirements approaches don't tackle the decision-making that is increasingly important in information systems; decision requirements, while important for all software development projects, are especially important for projects adopting business rules and advanced analytic technologies; and decisions are a common language across business, IT and analytic organizations, improving collaboration, increasing reuse and easing implementation.

Gaps in current requirements approaches

Today organizations use a variety of techniques to accurately describe the requirements for an information system. Most systems involve some workflow and this is increasingly described by business analysts in terms of business process models. Experience shows that when process modeling techniques are applied to describe decision-making, the resulting process models are overly complex. Decision-making modeled as business process is messy and hard to maintain. In addition, local exceptions and other decision-making details can quickly overwhelm process models. By identifying and modeling decisions separately from the process, these decision-making details no longer clutter up the process. This makes business processes simpler and makes it easier to make changes. However, identifying decision-making as a task in a process (or as a step in a use case or as a requirement) can result either in long, detailed descriptions that are confusing and contradictory, or in short descriptions that lack the necessary detail. All this has to be sorted out during development, creating delays and additional costs. By modeling the decisions identified, a clear and concise definition of decision-making requirements can be developed. A separate yet linked model allows for clarity in context.

Special needs of business rules and advanced analytics projects

Successful business rules and analytic projects begin by focusing on the decision-making involved. For business rules projects, clarity about decision requirements scopes and directs business rules analysis. For advanced analytic projects, a clear business objective is critical to success. Evidence is growing that specifying this objective in terms of the decision-making to be improved by the analytic is one of the most effective ways to do this. In both cases, then, it is essential to first define the decision-making required and only then focus on details like the specific business rules or predictive analytic models involved. Specifying a decision model provides a repeatable, scalable approach to scoping and managing decision-making requirements for both business rules and analytic efforts. Today, many business rules analysis efforts can seem never-ending, with teams trying to capture all the rules in a business area. The result is often a big bucket of rules that are poorly coordinated and hard to manage. Instead, by understanding which decisions will be made, when and to what purpose, it becomes easy to tell when business rules analysis is complete. For analytics projects, established analytic approaches such as CRISP-DM stress the importance of understanding the project objectives and requirements from a business perspective, but to date there have been no formal approaches to capturing this understanding in a repeatable, understandable format.
Now business analysts have the tools and techniques of decision requirements modeling to identify and describe the decisions for which analytics will be required. How the data requirements support these decisions, and where these decisions fit, is clarified, and the use of analytics is focused more precisely. Read more in the Decision Discovery section of the Appendix.

Decisions as a shared framework and implementation mechanism

Decision modeling provides a framework that teams across an organization can use and that works for business analysts, business professionals, IT professionals and analytic teams. Decisions are more easily tied to performance measures and the business goals of a project. This makes it easier to focus project teams where they will have the highest impact and to measure results. Many business analysts have known all along that decisions, and decision-making, should be a first class part of the requirements for a system. Systems that assume the user will do all the decision-making fail to deliver real-time responses (because humans struggle to respond in real time), fail to deliver self-service or support automated channels (because there is no human available in those scenarios), and fail front-line staff because instead of empowering them with suitable actions to take they require them to escalate to supervisors. What business analysts have lacked until now is a standard, established way to define these requirements. Decision modeling is a powerful emerging technique for business analysis. Using the standard DMN notation to specify Decision Requirements Diagrams, and so specify a Decision Requirements Model, allows the accurate specification of decision requirements. Enterprise Architects, meanwhile, are chartered with fitting business rules and analytic technologies like data mining and predictive analytics into their enterprise architecture. A service-oriented platform and architecture, supported by integration and data management technology, does not have obvious holes for these technologies. Decisions are both the shared framework and the technical mechanism to easily implement these technologies.

Be transparent and agile

The way Decision Management Systems make each decision is both explicable to non-technical professionals and easy to change. Decision-making in most organizations is opaque: either embedded in legacy applications as code or existing only in the heads of employees. Decisions cannot be managed unless this decision-making approach is made transparent and easy to change, or agile. As noted in Managing Decision Logic above, this need for both design and execution transparency is the primary driver for the use of a Business Rules Management System to manage decision-making logic. Four main best practices are relevant in this area: design transparency, execution transparency, explicable analytics and business ownership of change.

Design Transparency for business and IT

The first best practice in transparency is that of ensuring design transparency for both business and IT practitioners. Most code that is written is completely opaque as far as non-technical business users are concerned. Much of it is even opaque to programmers other than the one who wrote it. This lack of transparency is unacceptable in Decision Management Systems. Design transparency means writing decision logic such that business practitioners, business analysts and IT professionals who were not involved in the original development can all read and understand it.
This allows the design of the decision-making to be transparent, as everyone involved can see how the next decision is going to be made. This supports compliance, by allowing those verifying compliance to see how decisions will be made, and improves accuracy, by ensuring that everyone who knows how the decision should be made can understand how the system plans to make it. From a practical perspective this means writing all business rules so they can be read by business people (even those rules that will be edited by IT going forward) by avoiding technical constructs and terse programmer-centric variable names, for instance. It means ensuring that a business-friendly vocabulary underpins the rules; the use of IT-centric names for objects and properties is one of the biggest reasons business people cannot understand business rules. It also means using graphical decision logic representations such as decision tables and decision trees whenever possible, and following rule writing best practices like avoiding ORs and writing large numbers of simple rules instead of a small number of large, complicated ones. Design transparency is the fundamental building block for all other kinds of transparency and for agility.

Execution transparency and decision logic logging

It is essential to understand how the next decision will be made. Once decisions have been made, however, it will also be necessary to understand how they were made. The approach to the next decision will change constantly as business situations change or new regulations are enforced. The way the next decision will be made therefore diverges steadily from the way a decision was made in the past. Execution transparency means being able to go back and look at any specific decision to determine exactly how it was made. The decision logic and predictive analytic models used to make the decision must be recorded, logged, so that the decision-making sequence is clear. Ideally this should be left on all the time so that every decision is recorded, rather than being something that is only used for testing and debugging. When every decision can be analyzed, ongoing improvement becomes much easier. In an environment where any decision can be challenged, by regulators for example, such ongoing logging may be required. Most products support logging to a fairly technical format designed for high performance and minimal storage requirements. This will need to be expanded to be readable by non-technical users and integrated with other kinds of data (such as customer information or overall performance metrics) to deliver true execution transparency.

Explicable analytics

While the use of well-formed business rules to specify decision logic makes the biggest single contribution to transparency, explicable analytics have a role also. When decisions are made based on specific predictive analytic scores it will be important to be able to understand how that score was calculated and what the primary drivers of the score were. Just like decision logic, the way a score is calculated is likely to evolve over time, so it is important that the way a score was calculated at a particular point in time can be recreated. Some predictive analytic models are more explicable than others. The use of predictive analytic scorecards based on regression models, for instance, allows the contributions to a predictive score to be made very explicit and supports the definition of explanations, reason codes, that can be returned with the score.
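As a hedged sketch of how such a scorecard can produce reason codes alongside its score, consider the following additive scorecard. The characteristics, partial scores and base score are invented for illustration and do not describe any particular product or model; they are chosen to reproduce the prose example that follows.

```python
# An illustrative additive scorecard with reason codes. All characteristics
# and partial scores here are invented for illustration only.

# Partial scores contributed by each customer characteristic.
partial_scores = {
    "Never renewed": -0.15,
    "Single product": -0.10,
    "Long tenure": 0.12,
}
base_score = 0.75

# The final score is the base score plus all partial contributions.
score = base_score + sum(partial_scores.values())

# Reason codes: the characteristics that pulled the score down the most.
reason_codes = sorted(
    (name for name, pts in partial_scores.items() if pts < 0),
    key=lambda name: partial_scores[name],
)[:2]

print(round(score, 2), reason_codes)  # 0.62 ['Never renewed', 'Single product']
```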
Thus a customer may have a retention score of 0.62 with two reason codes, "Never renewed" and "Single product", that explain where that low score comes from. Decision trees, association rules and several other model types are also easily explicable. In contrast, models such as neural networks and other machine learning algorithms, as well as compound or ensemble methods involving multiple techniques, are often much less explicable. The value of explicable analytic techniques varies with the kind of decision involved, with regulated consumer decisions putting a premium on explicability while fraud detection, for instance, does not.

Business ownership of change

The final best practice is to focus ownership of change in the business. This means empowering the business to make the changes they need to the system when they need those changes made, or when they see an opportunity in making a change. Business ownership of change is not essential for a successful Decision Management System. Many, perhaps most, such systems still use IT resources to make changes when necessary. Often these are less technical resources, business analysts rather than programmers, but it is still IT that makes and tests the changes. Over time, most organizations will find that business ownership will improve the results they get from their Decision Management Systems. By empowering business owners to make their own changes (using capabilities like business user rule management and impact analysis) organizations will increase their agility and responsiveness, eliminating the impedance of the business/IT interface. Empowering the business to own their changes is not a trivial exercise, however, and cannot simply be asserted ("here you go, here's your new business rules interface, now please stop calling us"). An investment in suitable user interfaces and tools will be required, along with time and energy invested in change management.

Be predictive, not reactive

Decision Management Systems use the data an organization has collected, or can access, to improve the way decisions are being made by predicting the likely outcome of a decision and of doing nothing. Decisions are always about the future because they can only impact the future. All the data an organization has is about the past. When information is presented to human decision-makers it is often satisfactory to summarize and visualize it and to rely on a human's ability to extract meaning and spot patterns. Humans essentially make subconscious or conscious predictions from the historical data they are shown and then make their decisions in that context. When building Decision Management Systems, however, this approach will not work. Computer systems and Business Rules Management Systems are literal, doing exactly what they are told. They lack the kind of intuitive pattern recognition that humans have. To give a Decision Management System a view of the future to act as a context for its decision-making, we must create an explicit prediction, a probability about the future. Technology for this is described in Embedding Predictive Analytics and In-database Analytic Infrastructure above. Three best practices relate to this focus on turning data into insight. The use of data mining and other analytic techniques to improve rules, and analytic/IT cooperation, are best practices in development approaches. A focus on real-time scoring will make for more powerful Decision Management Systems.
Using data mining with business rules

Many organizations building Decision Management Systems keep their rules-based development of decision logic and their use of analytics completely separate. At best they only bring the two disciplines together when they reference a predictive score in a business rule. This is a pity, and a clear best practice is to do more to drive collaboration in this area, specifically by engaging data miners and data mining approaches in the development of business rules. To get started with this best practice, the first step is to use analytical techniques to confirm and check business rules. Many business rules are based on judgment, best practices, rules of thumb and past experience. The experts involved in defining these rules can often say what the intent behind them is: that a rule is to help determine the best customers or to flag potentially delayed shipments, for example. Historical data can be used to see how likely these rules are to do what is intended: for instance, the number of customers who meet the conditions in a best customer rule, or the correlation between the elements tested in the delayed shipment rule and actual delays in shipments. Using data in this way both improves the quality of business rules and helps establish the power of data to improve decision-making. While reporting and simple analysis tools can help in this area, the use of data mining is particularly powerful for these kinds of checks. More sophisticated organizations can also use data mining to actually find candidate business rules. Many data mining techniques, such as decision trees and association rules, produce outputs that can easily be represented as business rules. Using these techniques to analyze data and come up with candidate rules for review by those managing the decision logic can be very effective. Because the output is a set of business rules it is visible and easy to review, breaking down the kind of reluctance that more opaque forms of analytics can provoke. At the end of the day the best practice is simple to define: organizations should regard their historical data as a source of business rules just like their policies, best practices, expertise and regulations.

Analytic and IT cooperation

The power of predictive analytics is sometimes described as the power to turn vertical stacks of data (data over time) into horizontal information (additional properties or facts). Analytics professionals almost always look at data this way, seeking patterns in historical data that can be turned into probabilities or other characteristics, using analytics to simplify large amounts of data while amplifying its meaning. The challenge is that IT people do not think of data in the same way. IT departments tend to think of historical data as something to be summarized for reporting and as something to be moved off to backup storage to reduce costs or improve performance. They are very familiar with the design of a horizontal slice of the data, its structure, but not with how it ebbs and flows historically. They will often change data structures to improve operations without considering how it might affect historical comparisons, clean data to remove outliers and to include defaults, or overwrite values as time passes and data changes. Many of these kinds of standard IT tasks are very damaging from the perspective of an analytics team. A clear best practice, then, is to improve analytic and IT cooperation around data governance, data storage and management, data structure design and more.
In this context the analytic team cannot just be the Business Intelligence, dashboard and reporting team, but must include those doing data mining and predictive analytics. While the former are often part of the IT department and well integrated with the rest of the IT function, the latter are often spread out in business units or focused in a risk or marketing function. Building cooperation over time between analytic specialists and IT will reduce costs, improve the value and availability of data for more advanced analytics, and make integrating analytics into Decision Management Systems easier.

Real-time scoring not batch

A clear majority of organizations applying predictive analytic models today do so in batch. Having developed a predictive analytic model, they run daily or weekly updates of their database, adding a score calculated from a model to a customer or other record in the database. When a Decision Management System needs access to the prediction it simply retrieves the column that is used to store the score. Integration is easy because the Decision Management System accesses the score like it does any other data item. The problem with this is that batch scores can get out of date when data is changing more rapidly than the batch is being run. For instance, a customer propensity-to-churn score that does not include the problem the customer had this morning, or the inquiry they made about cancellation penalties, is not going to be accurate. In addition, this arms-length integration may be technically simple, but it also keeps the IT and analytic teams from needing to work together and is therefore potentially damaging in the long term. For long term success with Decision Management Systems, and in particular to develop the kinds of Decision Management Systems that will allow effective responses to events and to new, more mobile channels, organizations need to develop systems that use real-time scoring. A real-time score is calculated exactly when it is needed, using all the available data at that moment. This might include recent emails, SoMoLo (Social Mobile Local) data, the opinion of a call center representative on the mood of the customer and much more. Ultimately, being able to decide in real time using up-to-the-second scores, or even to score data as it streams into a system so that predictions are available continuously, will be a source of competitive advantage.

Test, learn and continually improve

The decision-making in Decision Management Systems is dynamic and change is to be expected. The way a decision is made must be continually challenged and re-assessed so that it can learn what works and adapt to work better. Supporting this kind of ongoing decision analysis requires both design choices in the construction of Decision Management Systems and integration with an organization's performance management environment. Both Business Rules Management Systems and Predictive Analytic Workbenches have functionality to make this easier, while Optimization Suites can be used to develop models to manage the potentially complex trade-offs that improving decision-making will require. This kind of continuous improvement relies on many of the features noted earlier, such as being able to link decisions to business outcomes and results, having execution transparency and decision logic logging, and support for real-time scoring not batch.
Test, learn and continually improve

The decision-making in Decision Management Systems is dynamic and change is to be expected. The way a decision is made must be continually challenged and re-assessed so that it can learn what works and adapt to work better. Supporting this kind of ongoing decision analysis requires both design choices in the construction of Decision Management Systems and integration with an organization's performance management environment. Both Business Rules Management Systems and Predictive Analytic Workbenches have functionality to make this easier, while Optimization Suites can be used to develop models to manage the potentially complex trade-offs that improving decision-making will require. This kind of continuous improvement relies on many of the features noted earlier, such as being able to link decisions to business outcomes and results, having execution transparency and decision logic logging, and support for real-time scoring not batch. In addition, the development of integrated environments for ongoing decision improvement, broad use of experimentation, and moving to automated tuning, adaptive analytics and optimization are all best practices worth considering.

Integrated decision improvement environment

To provide an integrated decision improvement environment, organizations should bring together the logs they have on how decisions have been made in the past, information about the business results they achieved using these decisions, and the decision logic/analytic management environment itself. Each piece of this environment typically involves a different piece of technology, with everything from a business rules management system to an analytic model management tool to traditional dashboard and business intelligence capabilities being used. Providing an integrated, coherent environment where all this is brought together around a particular decision offers real benefits to an organization. When business results can be compared to the decision-making that caused them, and when the business owner can navigate directly from this analysis to editors allowing them to change future decision-making behavior, organizations will see more rapid and more accurate responses to changing conditions.

Broad use of experimentation

Relatively few organizations are comfortable with experimentation. For most, experiments are confined to the marketing department or to low-volume experiments where customers and prospects are quizzed on preferences or likely responses. Some organizations use experimentation to determine price sensitivity, and a growing number of web teams use experimentation for website design. Yet without experimentation it is very hard to see if what you are doing is the best possible approach or to truly see if a new approach would work better. Unless the behavior of real customers or prospects (or suppliers or partners) is evaluated for multiple options, those options cannot really be compared. Asking people what they would do if they got a different option rarely results in data that matches what they actually do when they get that different option. Organizations that wish to succeed in the long term with analytics and with Decision Management Systems will invest in the organizational fortitude and expertise required to conduct continuous and numerous experiments.

Moving to automated tuning and adaptive analytics

The logical extension of a focus on real-time is to focus on automated tuning and adaptive analytics. Today most Decision Management Systems, and the analytics within them, are adapted manually, with experts considering the effectiveness of the decision and then making changes to improve it. As systems become more real-time, however, this becomes increasingly impractical and suboptimal. Especially in very high-volume, quick-response situations such as ad serving, the system is continually gathering data that shows what works and what does not. Waiting until a person considers this data before changing the behavior of the system means allowing the system to make poor responses long after the data exists to realize this is going on. The best practice is to consider the use of machine learning and adaptive analytic engines in these circumstances. Building trust in the organization that analytics work will increasingly allow analytic systems to be left to make more of the decisions themselves. Allowing analytic engines to collect performance data and respond to it, perhaps within defined limits, will improve the performance of real-time decision-making while reducing the length of time it takes to respond to a change. Not all decisions are suitable for these kinds of engines. For instance, decisions that have a strong regulatory framework, or where the time to get a response to a decision is long, will not work well. Where a decision is suitable, however, a clear best practice is to integrate these kinds of more adaptive engines into Decision Management Systems.
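A minimal sketch of such an engine, assuming an epsilon-greedy approach (one of the simplest adaptive techniques), is shown below. It mostly serves the best-performing option while spending a small fraction of traffic exploring alternatives, and it updates itself as feedback arrives.

```python
# Sketch: a tiny epsilon-greedy engine for an ad-serving style decision.
# It serves the best-performing option most of the time, explores with a
# small fraction of traffic, and adapts as feedback arrives instead of
# waiting for a person to review the data.
import random

class EpsilonGreedy:
    def __init__(self, options, epsilon=0.1):
        self.epsilon = epsilon
        self.shown = {option: 0 for option in options}
        self.successes = {option: 0 for option in options}

    def success_rate(self, option):
        # Unshown options score infinity so each gets tried at least once.
        if self.shown[option] == 0:
            return float("inf")
        return self.successes[option] / self.shown[option]

    def choose(self):
        if random.random() < self.epsilon:              # explore
            return random.choice(list(self.shown))
        return max(self.shown, key=self.success_rate)   # exploit

    def record(self, option, success):
        # Performance data collected and applied by the engine itself.
        self.shown[option] += 1
        if success:
            self.successes[option] += 1

engine = EpsilonGreedy(["ad_a", "ad_b", "ad_c"])
ad = engine.choose()
engine.record(ad, success=True)  # e.g. the ad was clicked
```

The "defined limits" mentioned above would appear here as constraints on which options the engine may serve and how far its behavior may drift before a person is asked to review it.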
Optimization

One final best practice in this area is to increase the use of optimization over time. A powerful approach, optimization is often siloed into specific parts of the business and regarded as a bit of a sidebar to core analytic efforts. In part this is because the mathematics can be very complex and because the solutions can take a long time to develop. A lack of business-user-friendly interfaces for reviewing results, and a need to integrate optimization with simulation tools, also limit the use of optimization in many organizations. This is beginning to change, however, as more business-friendly interfaces are developed and as optimization tools become more integrated into the overall stack for developing Decision Management Systems. Faster and more stable optimization routines, standard templates, and integration with both predictive analytics and business rules are also helping. Organizations should regard the use of optimization as part of their decision design and improvement processes as a best practice and should therefore seek to bring it out of its silos and into the mainstream.
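As a small illustration of the kind of trade-off model involved, here is a toy linear program using scipy; the products, hours and margins are invented. Real optimization suites handle far larger models, but the shape of the problem (an objective plus constraints on scarce resources) is the same.

```python
# Sketch: the kind of trade-off model an Optimization Suite manages,
# reduced to a toy linear program with scipy. Allocate two products'
# production to maximize margin subject to machine-hour limits; all
# numbers are invented for illustration.
from scipy.optimize import linprog

# Maximize 40*x1 + 30*x2  ->  linprog minimizes, so negate.
objective = [-40, -30]

# 2*x1 + 1*x2 <= 100 machine hours; 1*x1 + 2*x2 <= 80 finishing hours.
lhs = [[2, 1], [1, 2]]
rhs = [100, 80]

result = linprog(objective, A_ub=lhs, b_ub=rhs, bounds=[(0, None), (0, None)])
print(result.x, -result.fun)  # optimal production plan and total margin
```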
There are many compelling use cases for Decision Management Systems. Any time an organization must make a decision over and over again, and where the accuracy or consistency of that decision, its compliance with regulation or its timeliness are important, Decision Management Systems can play an important role. Organizations can often spot such decisions by including a decision requirements step in their enterprise architecture framework and then looking for decision words such as determine, validate, calculate, assess, choose, select and, of course, decide. For instance:

- Determine if a customer is eligible for a benefit
- Validate the completeness of an invoice
- Calculate the discount for an order
- Assess which supplier is lowest risk
- Select the terms for a loan
- Choose which claims to Fast Track

As noted in Suitable Operational Decisions, these decisions have certain characteristics that make managing decision logic, optimizing trade-offs and embedding predictive analytics valuable. It is useful to categorize decisions into various types, though some decisions include characteristics of several types. For instance:

Eligibility or Approval. Is this customer/prospect/citizen eligible for this product/service? These decisions are made over and over again and should be made consistently every time. The use of a business rules-based system to determine eligibility, or to ensure that a transaction is being handled in a compliant way, is increasingly common. These decisions are policy- and regulation-heavy, and the use of a Business Rules Management System to handle all the business rules is very effective. While eligibility and compliance decisions can seem fairly static, changes are often outside the control of an organization and can be imposed at short notice.

Validation. Is this claim or invoice valid for processing? Validation decisions are almost always operational, they are overwhelmingly rules-based, and the rules are generally fixed and repeatable. Validation is often associated with forms, and online versions of these forms are of little use without validation. The move to mobile apps makes validation even more important.

Calculation. What is the correct price/rate for this product/service? Calculations are usually operational and they are overwhelmingly rules-based. The rules are generally fixed and repeatable, but making them visible and manageable using business rules pays off when changes are required or when explanations must be given. Sadly, calculations are often embedded in code; the sketch after this list shows the declarative alternative.

Risk. How risky is this supplier's promised delivery date, and what discount should we insist on? Making a decision that involves a risk assessment, whether delivery risk or credit risk, requires balancing policies, regulation and some formal risk analysis. The use of business analytics to make risk assessments has largely replaced gut checks, and predictive analytic models allow such risk assessments to be embedded in systems.

Fraud. How likely is this claim to be fraudulent, and how should we process it? Fraud detection generally involves a running battle with fraudsters, putting a premium on rapid response and an ability to keep up with new kinds of fraud. Managing the expertise and best practices required to detect fraud using business rules gives this agility, while predictive analytics can help with the kind of outlier detection and pattern matching that increases the effectiveness of these systems.

Opportunity. What represents the best opportunity to maximize revenue? Especially when dealing with customers, organizations want to make sure they are making the most of every interaction. To do so they must make a whole series of opportunity decisions, such as what to cross-sell or when to upsell. These decisions involve identifying the best opportunity, the one with the greatest propensity to be accepted, as well as when to promote it and where. A combination of expertise, best practices and propensity analysis is required.

Maximizing. How can I use these resources for maximum impact? Many business decisions are made with a view to maximizing the value of constrained resources. Whether it is deciding how best to allocate credit to a card portfolio or how best to use a set of machines in a production line, deciding how to maximize the value of resources involves constraints, rules and optimization.

Assignment. Who should see this transaction next? Lots of business processes involve routing or assignment. In addition, when a complex decision is automated it is common for some percentage to be left for manual review or audit. The rules that determine who best to route these transactions to, and how to handle delays or queuing problems, can be numerous and complex, making them ideal for managing in a Decision Management System.

Targeting. What exactly should we say to this person? In many situations there is an opportunity to personalize or target someone very specifically. By combining everything known about someone with analytics predicting likely trends in their behavior and best practices, and constraining this to be compliant with privacy and other regulations, individuals can feel like the system is interacting only with them.
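A minimal sketch of the declarative style referred to above, with invented thresholds and field names, might look like this. The eligibility rules and discount tiers are plain data that can be reviewed and changed when policy changes, and failed rule names double as explanations.

```python
# Sketch: eligibility and calculation decisions expressed as data
# rather than buried in code, so they can be reviewed and changed
# when policy changes. Thresholds and field names are invented.
ELIGIBILITY_RULES = [
    ("age >= 18",            lambda a: a["age"] >= 18),
    ("resident",             lambda a: a["resident"]),
    ("income below ceiling", lambda a: a["income"] < 45_000),
]

DISCOUNT_TIERS = [  # order total threshold -> discount rate
    (10_000, 0.10),
    (5_000, 0.05),
    (0, 0.00),
]

def is_eligible(applicant):
    # Every rule must pass; failed rule names double as explanations.
    failed = [name for name, test in ELIGIBILITY_RULES if not test(applicant)]
    return (len(failed) == 0, failed)

def order_discount(total):
    return next(rate for threshold, rate in DISCOUNT_TIERS if total >= threshold)

print(is_eligible({"age": 25, "resident": True, "income": 30_000}))
print(order_discount(7_500))  # -> 0.05
```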
The rest of this chapter will focus on specific use cases that have been handled using Decision Management Systems. Every one of these examples has been automated and represents a client either of Decision Management Solutions or of one of the vendors in the report. If you want to know more about any of them, email us and we will connect you with additional information. The use cases in this section are divided into a number of categories. Some of these are verticals, such as the section on government operations, while others are focused on categories relevant to multiple industries, such as fraud detection or personalization. Within each section a number of real examples are explained, but this is not an exhaustive list of possible use cases.

Many organizations suffer losses from fraud and abuse. These range from fraudulent claims for services that were never performed, to applications for credit for people who don't exist, to orders that include bribes and illegal payments. In every case an organization must decide whether to accept the transaction as valid, reject it, or investigate it for fraud. These decisions are high volume, as they must be made for each transaction, and are ideal for automation using a Decision Management System. Fraud detection systems typically involve business rules for compliance with policies and regulations, as well as predictive analytics to match the current transaction to patterns known to be fraudulent or to identify that the current transaction looks very different from legitimate ones. A wide variety of fraud detection and handling Decision Management Systems are built, and fraud detection is one of the primary use cases for Decision Management. Specific examples of use cases are listed below; it should be noted that all these decisions are increasingly combined into an integrated fraud management system.

Transaction is fraudulent. The basic fraud detection use case. Organizations will withhold payment, withhold partial credit or decline a payment to prevent fraud. Suitable transactions include warranty claims, insurance claims, credit card payments, auction payments, tax returns and many more. Besides the basics of declining or only partially paying, some Decision Management Systems will identify transactions that require follow-up, such as a call from your credit card issuer, even though the transaction was accepted.

Application fraud. A variant on transaction fraud is application fraud. For instance, when a consumer or organization is applying for service, especially one provided on credit or involving other risks to the provider, a Decision Management System can be used to determine if the application is fraudulent and how to handle it in terms of review or rejection.

Identity Fraud. When someone applies for a service or product, or makes a transaction, it is important that they are who they say they are. At other times, too, the use of a Decision Management System to identify potential identity fraud is highly valuable. Such systems can be part of preventing application or transaction fraud but can also be used independently, such as for security or access control.

Supplier or provider is fraudulent. Even when a transaction appears valid it is possible that it is associated with a provider of a service or good that has a pattern of behavior that suggests fraud. Decision Management Systems are used to identify those suppliers or providers of service, in healthcare for instance, that have a pattern of such behavior so that even apparently valid transactions can be reviewed before being paid.
Fraud network. The newest Decision Management Systems in fraud are focused on fraud networks. These decide if the combination of customers, suppliers, inspectors and auditors, or the combination of doctor, patient, pharmacist and claimant, together represents a fraud risk. Each of the individuals may seem fine, and the transaction likewise, but the network is fraudulent.

Another well-established area for Decision Management Systems is that of underwriting and origination. Whether originating loans, mortgages or credit, or underwriting insurance, these decisions offer a strong use case for Decision Management Systems. Often regulated and constrained by policy, these decisions can be effectively managed using business rules. An assessment of risk is often critical to deciding what price or terms to offer: higher-risk customers must provide more documentation or pay a higher interest rate. The use of predictive analytic models to predict risk in these circumstances is also well established. Combining these business rules and predictive analytic models into a Decision Management System is a very effective tool for automating the underwriting decision. A series of decisions is typically involved in originating or underwriting, and Decision Management Systems have been built for many of these.

Quoting. An initial calculation of the likely price drives a quoting decision. Some Decision Management Systems provide only an estimated quote, while others use more complex decisioning, including a risk assessment, to produce a bindable or committed quote that the company is willing to stand behind. Estimated quotes are often easier to generate with less data, making them appealing to users in a hurry, while bindable quotes typically involve more data input and time but are more solid.

Underwriting. Underwriting or originating the loan or insurance product typically involves applying both rules (from regulations and policies) and making some kind of risk assessment (credit risk, insurance claim risk, etc.) by predicting the likelihood of one or more bad outcomes using predictive analytic models. Such systems often replace manual decision-making, improving consistency, removing bias and freeing up underwriters or loan officers to focus on complex cases and the overall process. Some forms of origination and underwriting are sufficiently complex (commercial loans and insurance, for instance) that the role of Decision Management Systems is largely in helping a human user, either by making some of the component decisions within the overall decision or by at least eliminating options or choices that are not allowed in the circumstances.

Pricing. Pricing a loan or policy is sometimes a separate calculation decision managed by a Decision Management System, whether or not the decision to underwrite is automated. These are typically based purely on calculation rules.

Payroll deduction calculation. When applicants are approved for insurance or loans there may be additional calculations that can be automated. One example is a Decision Management System to calculate appropriate payroll deductions and the tax implications of same.

How to approve this request. Some organizations are not interested, willing, or able to automate the decisions themselves. Even in these circumstances Decision Management Systems can play a useful role.
Organizations have built Decision Management Systems to manage the approval process (applying regulations and restrictions on how approval is managed and who is involved), to identify the forms and proofs necessary prior to approval, and more. Even when the business decision is left to a human user, Decision Management Systems can improve throughput and efficiency. By handling decisions such as readiness (do we have all the paperwork we need?), assignment and routing, they can make the manual decision-making flow more quickly and efficiently.

The use of Decision Management Systems to focus marketing efforts more effectively is becoming increasingly common as the cost of building and operating Decision Management Systems drops. Where in the past the value of each individual decision had to be quite high to justify a Decision Management System (thus fraud and risk-based decisions dominated), modern platform technologies and pre-configured Decision Management Systems can be used even when the value of each decision is very low. For instance, the difference between a good cross-sell decision and a bad one may not be very great, while the difference between a good loan origination decision and a bad one may result in thousands of dollars of losses. Organizations focused on becoming customer-centric are increasingly turning to an approach known as next best action or next best activity (some more grammatically precise organizations talk about best next action). Such an approach involves considering every action that the organization could take towards a customer (making a cross-sell offer, collecting new information about customer preferences, reminding them to use a product they already own) and ensuring that each opportunity for interaction uses the best one for long-term customer value. This focus on actions, not just offers, and a desire to centralize and systematically improve the selection of the best action, drives a need for a Decision Management System focused on this decision. While these are not limited to marketing actions (they include service and support issues), they are typically rooted in Marketing. See also personalization below.
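A minimal sketch of the core selection logic, with invented action definitions and a hypothetical propensity model, might look like the following: business rules filter the candidate actions, then analytics rank what survives.

```python
# Sketch: core of a next-best-action decision. Business rules filter the
# candidate actions, analytics rank the survivors. The action structure
# and the propensity model are invented placeholders.
def next_best_action(customer, actions, propensity_model):
    # Contact-policy rule: respect a monthly contact cap.
    if customer["contacts_this_month"] >= 3:
        return "no_action"
    candidates = []
    for action in actions:
        # Eligibility rules defined per action.
        if not action["eligible"](customer):
            continue
        # Propensity to accept, weighted by the action's long-term value.
        score = propensity_model.score(customer, action["name"])
        candidates.append((score * action["long_term_value"], action["name"]))
    # Highest expected long-term value wins; otherwise take no action.
    return max(candidates)[1] if candidates else "no_action"
```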
Targeted Marketing. Organizations are trying to ensure that their marketing is more relevant and targeted. They are dividing customers and prospects into increasingly small segments using analytics and then focusing messaging on these segments. Combining business rules and predictive analytics to effectively target every prospect, this approach to targeted marketing relies on a Decision Management System at its core. The need to replace blanket campaigns that send the same offer to everyone with something more focused drives the need for a Decision Management System.

Next Best Offer. The classic marketing Decision Management System is one that calculates the next best offer for a customer. Such systems apply best practice and contact rules, as well as predictive analytic models for propensity to buy, to determine which of a company's products is most appropriate as the next product for a customer. This then drives promotional activity.

Cross-sell. Related to the next best offer approach is the use of Decision Management Systems to drive cross-sell. Companies are developing these systems to suggest appropriate cross-sell offers in call centers as well as driving them into the check-out process online. Some are even using them in store locations. Improved cross-sell drives higher basket value and can improve loyalty by creating customers with more (product) connections to a company. These decisions are increasingly managed across product lines or lines of business, further increasing the value proposition of a centrally managed system.

Upsell. In an almost identical fashion, companies are using Decision Management Systems to identify a product from within the line of business that is more profitable or advantageous than the one the customer is currently planning to buy. These systems tend to stay within a line of business and are evolving from being rules-based to including analytics that predict what is likely to be accepted, so that upsells are not made when they will simply irritate a customer.

Customer Next-Best-Action. As noted, some organizations are evolving their marketing and support systems to a next best action approach. These Decision Management Systems coordinate all possible actions (sell this additional product, encourage use of this service the customer already has, recommend this product fix or FAQ answer, ask the customer for this clarification on their data) and select the one that is most likely to move the customer conversation along and build long-term value. These systems involve business rules about who to contact and when, as well as definitions of product or action eligibility, while predictions of propensity to accept and of likely future profitability are at the heart of effective choices. The marketing department typically drives much of this, but customer service and support must be involved also if the system is to truly focus on next best action.

Determine coupon. Some businesses rely on coupons and on getting coupons (whether paper or electronic) into the hands of customers who will use them in a way that boosts a company's bottom line. Decision Management Systems are used to determine which customers are eligible for which coupons and, increasingly, to focus coupon spend analytically where it is likely to have the most impact.

Personalize offer. Marketing in some organizations is moving beyond segments and standard offers (who gets which offer) to a focus on personalization. These organizations are personalizing their interactions with customers and prospects using everything they know about a prospective customer. Moving beyond just using names and locations, these Decision Management Systems are making a micro-decision about each prospect, generating messages and contact strategies specific to that prospect. Not which customers get this offer, but what should we say to this customer right now.

Change or prevent behavior. Some Decision Management Systems send communications designed to change behavior on the part of prospects or customers. These are not necessarily focused on offers or products but send specific information designed to provoke a short-term or long-term change in behavior. For instance, Decision Management Systems have been built to target someone to increase the likelihood they will make a bequest, to increase their loyalty, to reduce the likelihood they will churn, and more. These systems use predictive analytics, to identify those most likely to make a bequest for instance, and then use the factors that drive this model to see what content or communication might influence others to do likewise.
These systems can also be very real-time and responsive, reacting rapidly to competitors, identifying those customers who will be impacted by a change, and targeting them with the content most likely to counteract that competitor's behavior and so sustain loyalty.

Personalization may seem like the realm of Marketing, but in fact Decision Management Systems have been used to drive a wide range of personalization beyond that used in marketing offer management. These systems take what is known about a user (information about them, past history, preferences and, increasingly, predictions about their likely interests and future behavior) and use this to personalize some interaction with them. These Decision Management Systems replace one-size-fits-no-one interactions with intensely targeted interactions, allowing users to feel that they are known and helping them navigate increasingly large pools of information.

Determine relevance of content. As organizations try to help users navigate huge volumes of content, Decision Management Systems are increasingly being used to decide what content is relevant to a particular user. Such systems are often necessary as traditional agents or intermediaries are no longer available. In travel, for instance, travel agents used to act as a filter on content for travelers. Now, with more people booking online, a Decision Management System is needed to provide that same filtering, as otherwise there is simply an overwhelming amount of information to review.

Customize advice. While there is a lot of generic content available today, consumers increasingly look for advice customized to them. By applying expert rules and analytics, Decision Management Systems can be used to customize the advice being given. For instance, when giving people weight loss or pain management advice, the results of a questionnaire can drive sophisticated rules and analytics based on medical best practices and research to produce advice that is tailored, specific and relevant.

Configure offer, product or service. In a similar fashion, some organizations are using Decision Management Systems to configure offers, products or services. Whether for computers that are assembled from a wide variety of parts, trucks that can be ordered to meet personal needs, or vacation packages, it is often a non-trivial exercise to determine that a particular configuration is allowed or buildable. Decision Management Systems are used to suggest configurations, to match configurations to stated goals, and to confirm a custom configuration.

Optimal price for this customer. As dynamic pricing has become more common, determining the optimal price for a specific customer has likewise become more common. Using a Decision Management System to correctly price a product or service for a specific customer, based not only on their needs and configuration but also on the value they will place on the product and potentially their ability to pay, allows companies to maximize the value of their sales. As more data and more sophisticated systems become available, this is moving to focus on individual customers, not just customer segments, for truly personalized pricing.

Collections, chasing down those who owe money to the organization and collecting it, is a complex problem. Traditionally handled with large teams of people "dialing for dollars" and a first-in, first-out or highest-dollar-value approach to prioritization, collections can be made dramatically more effective using Decision Management Systems. See also personalization above.
Next best action. Some forward-looking organizations are using Decision Management Systems to dynamically assign work to collections agents. Instead of having each agent work through their own queue, these systems dynamically prioritize the available collections work and assign it to agents as they become available. Using everything known about the overdue payment, predictions of the likelihood that someone will pay, and even the skills of the collection agent, these systems determine the next best collection action. There is a general move toward next best action systems across the board. Whether it is actions for customers, actions for collections agents, audits or quality reviews, focusing limited resources on the next best action adds value when it replaces traditional first-in/first-out systems.

How to handle non-payment. Even when using standard queuing and assignment systems, collections organizations can benefit from Decision Management Systems. In particular, the use of business rules and predictive analytics to determine the most appropriate way to handle non-payment situations is effective at reducing unnecessary calls and increasing collection rates. By identifying those most likely to simply have forgotten and prioritizing a simple reminder, by predicting the amount someone can pay and the likelihood they will stick to a commitment, as well as by ensuring consistent application of collections policy, Decision Management Systems can dramatically improve the way non-payment situations are handled.

While many of the scenarios identified as candidates for Decision Management Systems are commercial, government operations can also use them to improve the effectiveness and efficiency of public sector organizations. More heavily focused on the use of business rules to enforce regulations and associated policy, public sector Decision Management Systems can improve consistency, provide enhanced self-service for citizens, and demonstrate compliance. The growing use of predictive analytics in these systems can also help target constrained government resources where they will do the most good.

Benefit eligibility. Perhaps the most common Decision Management System in the public sector, the use of a rules-based system to determine who is, and who is not, eligible for a benefit or service has clear benefits. Not only is the system consistent, always applying the same rules, it is available 24×7, improving access for all. Such a system can also be readily changed when regulations change, or even when court cases demand exceptions or updates.

Benefit calculations. Related to eligibility is the calculation of benefits. While some benefits are straightforward to calculate, others can be very complex. When multiple factors must be considered, complex questionnaires processed, and tax returns consulted to determine the correct value of a benefit, a Decision Management System can dramatically improve both response time and accuracy.

Tax or fee calculations. Government agencies must often make complex calculations of taxes or fees owed (such as vehicle license fees or business registration fees). These calculations can get complex and, perhaps even more importantly, are much more prone to change than the systems and processes of which they are part. By separating out the calculation as a Decision Management System, an agency can create a stable process, for registering cars or handling tax returns, while retaining the ability to make rapid and effective changes to the calculation.
Permits or other paperwork needed. One of the most frustrating processes for citizens is often determining which permits or paperwork are needed for a particular activity, to modify a house for instance, or to apply for a grant. Using a Decision Management System to help citizens navigate these kinds of decisions reduces their frustration and allows limited resources to be applied to solving problems, not discussing paperwork. Putting the decision first (deciding what paperwork is required and then processing it) can also dramatically simplify the processes involved.

Submission completeness and approval. As government agencies have developed online interfaces for forms, allowing citizens to submit paperwork electronically, they have created the opportunity for new uses of Decision Management Systems. If a form can be submitted electronically then a Decision Management System can be used to check that it is complete. It can also do so intelligently, using data entered in one part of the form to make sure that other parts are filled out correctly, moving far beyond simply mandatory fields or defined ranges for values. Predictive analytics can be added to flag potential fraud where appropriate, allowing the automatic processing of complete, low-risk applications and manual review of others.

Audit selection. Many government departments must decide who to audit. These audits often uncover unpaid taxes, fraud or abuse. Yet the groups that conduct these audits are constrained by budgets and headcount limits in ways that mean that not all potentially useful audits can be conducted. A Decision Management System can use expert rules as well as predictive analytics to prioritize the most potentially valuable audits, and do so with transparency: the rules being applied will be clear, so there is no chance of bias or favoritism. Some organizations have even gone to a next best audit approach, dynamically assigning auditors to investigations as they become available.

Targeting resources. Another use of Decision Management Systems to maximize the value of resources comes in targeting scarce resources where they will do the most good. Police forces can assign patrol cars or beat officers to neighborhoods, educational authorities can assign advisors to at-risk students, and social services can assign case workers using Decision Management Systems. These can apply not just policy and best practices but also predictions of risk (of crime, of dropping out of high school, etc.) and of the most effective intervention, to maximize the value of resources in terms of overall results.

Identify fraud, waste and abuse. Finally, there are many ways to use Decision Management Systems to identify fraud, waste and abuse. This includes identifying fraudulent tax returns, providers who are inefficient users of government grants, and even people making unnecessary emergency calls. By flagging these transactions and individuals, Decision Management Systems focus government budgets where they will help most, reducing the cost of a given level of service. See also fraud detection above.

Decision Management Systems are just beginning to penetrate supply chain management. There is tremendous potential for Decision Management Systems in this area, particularly as organizations look for ways to bring predictive analytics to bear on their supply chain. By focusing predictive analytics on specific orders or shipments, Decision Management Systems make it possible to effectively apply more advanced analytics even in very complex supply chains.
Many examples exist despite a low overall penetration rate.

Eligible supplier. One of the most basic use cases for Decision Management Systems in the supply chain is that of determining eligible suppliers. For organizations with large numbers of suppliers, especially those where commodity products are sourced from many competing suppliers, the automated determination of eligible suppliers can be a big time and cost saver. Allowing organizations to determine for themselves if they could become a supplier, and allowing each country or product line to add its own additional criteria for eligibility, are additional reasons for using a rules-based approach to determine eligibility in a flexible way.

Best supplier selection. While supplier eligibility can be made more efficient using a Decision Management System, it is also possible to become more effective in the use of suppliers by automating the selection of the most appropriate supplier for a given order. Using both eligibility rules and predictive analytics that show the likelihood of on-time and to-specification deliveries, for example, can create a system that automatically selects suppliers based on the right balance of cost, speed and quality given the circumstances of the order. Such systems improve straight-through processing, reducing the need for human involvement in increasingly complex supply chains.

Routing and Shipping Selection. As supply chains become more complex and distributed, it is also increasingly hard to know how to route a delivery or what shipping mechanism to use. When those shipping the order are not those paying for it, as is often the case when many small manufacturers are tied into a global supply chain, real-time determination of the right thing to do is essential. A Decision Management System can apply best practices, policy, short-term deals offered by shippers, current traffic problems and more to come up with the best shipping option and the best routing, reducing costs throughout the supply chain.

Reorder levels and alerting. While many supply chain systems have automated thresholds for re-ordering or for alerting a user that stock is low, these are often simplistic. The reality of a modern supply chain is that the right amount of stock, and where that stock should be, is highly variable. It can depend on the season, on trends in sales, on competitive behavior, on marketing campaigns and much more. Using a Decision Management System allows multiple sources of rules to be applied to the decision, allows the integration of predictions and forecasts, and supports short-term adjustments and tweaks as necessary.
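As a sketch of the reorder idea, with an invented forecast and invented adjustment factors: the forecast supplies expected demand over the lead time, and business rules layer campaign, seasonality and safety-stock adjustments on top.

```python
# Sketch: a reorder decision that layers rules on top of a forecast,
# rather than using one fixed threshold. The forecast values and the
# adjustment factors are invented placeholders.
def reorder_decision(sku, stock_on_hand, forecast_daily_demand,
                     lead_time_days, campaign_boost=1.0, season_factor=1.0):
    # Predicted demand over the replenishment lead time...
    expected_demand = forecast_daily_demand * lead_time_days
    # ...adjusted by business rules for campaigns and seasonality,
    # plus a 20% safety-stock rule.
    reorder_point = expected_demand * campaign_boost * season_factor * 1.2
    if stock_on_hand < reorder_point:
        return ("reorder", round(reorder_point - stock_on_hand))
    return ("hold", 0)

print(reorder_decision("SKU-42", stock_on_hand=300,
                       forecast_daily_demand=40, lead_time_days=7,
                       campaign_boost=1.5))  # -> ("reorder", 204)
```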
Many organizations must use assets, fixed plant for instance, as effectively as possible if they are to operate profitably. The use of a Decision Management System to improve decisions about such assets is still relatively unusual, but there is a growing set of examples. Particularly as more equipment is instrumented and connected to a network, the value of a Decision Management System for making targeted decisions specific to each asset is rising.

Service needs. One of the most basic uses for a Decision Management System is the identification of service needs. Today most assets are serviced on a fixed basis. However, as usage data is collected for a specific machine or piece of equipment, it becomes possible to calculate unique service needs for that piece of equipment. Thus a tractor being driven more aggressively, though traveling the same number of miles and being the same age as another, will be identified as needing service more often. This keeps equipment healthy longer while also eliminating unnecessary services.

Validate usage. This same increased instrumentation is driving remote monitoring and advice to new levels. When a piece of equipment is constantly monitoring its own usage and logging this information, a Decision Management System can be used to check that the usage is appropriate. For instance, if heavy equipment is being left idling too much during a particular shift, or if a particular operator is heavy-handed in some way, this can be flagged and remedial actions suggested. This uses a Decision Management System to provide supervision through the remote logging. Service needs and potential failures can also be flagged and alerts issued.

Preventative Maintenance. Failures and problems with expensive assets can result in extensive, and costly, downtime. The ideal for many organizations is to fix things before they become critical, to minimize the risk of such downtime. Decision Management Systems can use predictive analytics to identify assets at risk of failure and then use rules to assign an engineer's spare time to check the asset, or to extend a scheduled visit to proactively fix something early.

Proactive use of asset. If assets are not in continuous usage then there is a potential opportunity to use the asset for some other activity during its downtime. Deciding what to do with otherwise idle assets is increasingly something that can be automated using a Decision Management System, based on the prediction of the likely value of the various possible actions.

Manufacturing is another area where Decision Management Systems are being introduced. Many manufacturing operations are large scale, with huge numbers of potential decisions to be made. As customers demand more customized products, organizations must also customize at scale. This means that tasks and work allocations that used to be identical across production runs must be customized and tweaked for different customers or batches. Decision Management Systems offer a new level of control. See also process efficiency below.

What to make. One of the most basic decisions is that of what to make. When an organization manufactures for stock, rather than specifically for an order, it must constantly choose what to make and what not to make, what colors to pick, what packaging sizes to use, and so on. A Decision Management System can use predictions and forecasts, current stock levels and more to decide what is most appropriate to make at any given moment.

Allocation and configuration of machines. Especially in complex manufacturing situations, the allocation of work to a machine and the configuration of that workstream can be critical to the overall effectiveness of the line. Many machines might be capable, if configured correctly, of handling specific tasks, and specific tasks might be assignable to many machines. Decision Management Systems can handle the complexity of this kind of situation, applying the rules that determine which machines can do what and combining them with predictions and even optimization to ensure maximized results.

Reducing manufacturing problems. In complex manufacturing scenarios there is a constant risk that problems might be introduced to the finished product.
The wrong part may be used, something may be damaged by the production process, or a task may take an unexpectedly long time. Using Decision Management Systems can reduce these problems. Systems can assign quality improvement actions to the QA team, replacing fixed checklists with dynamic next-best-check systems, while also assigning supervisory and training resources for proactive mitigation of potential problems. Predictions of risk, rules about skill levels required and certifications, and much more can be used to drive increasingly sophisticated decision-making on the production floor.

Governance, Risk and Compliance is a broad topic that is a serious area of focus in many regulated industries. Ensuring that everything is done according to the regulations, enforcing and managing a governance environment, and tracking and accounting for all appropriate risks can be a daunting task. Attempting to do all this manually is prohibitively expensive. Decision Management Systems provide the leverage organizations need to effectively manage their GRC approach.

Data Management. When it comes to data, regulations can prescribe what data should be stored, what cannot be stored, what must be anonymized, and much more. Who can access the data, under what conditions and with what degree of supervision may all be spelled out. Reporting can be specified too, documenting what must be reported to whom and by when. All of this data management can be enforced and managed with a Decision Management System, avoiding fines and reducing overhead. Using rules to automatically manage all the explicit guidance, and integrating analytics to help with detecting identity fraud, makes keeping data safe, secure and appropriately available practical. Unlike manual processes, Decision Management Systems scale up quickly and respond in real time as data flows through your systems.

Authorization. Another significant issue in GRC is who can approve whom and what. Preventing people from (even indirectly) approving their own expenses, trades or data access is important and increasingly complex in matrixed organizations. Again, identity theft prevention is critical if authorization schemes are to be robust and believable. Ensuring that a single coordinated set of business rules drives authorization across multiple systems is a great role for a Decision Management System.

If, despite everything, things go wrong, GRC systems need to alert the right people, give them the right information, and do so quickly enough to avoid the additional fines and problems that can result from delay. Rather than pushing dumb alerts to someone's desktop (and hoping they respond), a Decision Management System can act automatically and work its way through the right set of escalations and notifications, even when this chain of events is complex and changes often.

Healthcare is an industry increasingly rich in data. Yet simply applying analytics, just doing what the data tells you, is not really practical in an industry where peer review, published best practices and government regulation abound. Decision Management Systems, with their combination of rules and analytics, are ideal for this environment. As more of the healthcare industry is computerized and more data is collected, Decision Management Systems are playing an increasingly important role. As healthcare goes mobile, helping patients live at home and treat themselves, this is only going to increase.
Identify drug interactions and other issues. One of the most common uses of Decision Management Systems in healthcare is to identify potential problems in prescriptions. Identifying potential drug interactions and checking dosages prescribed against patient details involves large numbers of rules gathered from best practices, medical research, drug companies and more. Providing these checks in the hospital as nurses administer drugs, at the pharmacy as prescriptions are fulfilled, and warning doctors about potential issues can all be driven from the same rules, ensuring consistency and reach.

Determine treatment. Best practice in healthcare evolves continuously. New therapies, new suggestions, new drugs, and new ways to match a patient to a therapy, using genetic matching for instance, make it hard for medical professionals to stay up to date on the latest treatments. Especially when multiple possible treatments can be proposed, selecting the one most likely to work for a particular patient (personalized medicine) is complex. Decision Management Systems engage medical professionals in managing their own rules, bring analytics to bear as data is gathered regarding what works, and easily stay current as best practices and guidelines change.

Target at-risk people. While we might wish we could always apply all the resources that might help to a medical problem, the reality is that we cannot. Determining which patients are most at risk, and what kinds of interventions are likely to have the biggest impact, is a fact of life for most healthcare organizations. Using analytics, especially predictive analytics, as well as expert rules and best practices, a Decision Management System can ensure resources are applied effectively to those most at risk.

Scheduling. Healthcare, like many labor-intensive industries, involves complex schedules. Making sure that the relevant specialties are available at the right time and place, managing staffing to match demand, ensuring that operating rooms are prepped before they are needed: all this makes scheduling in healthcare difficult. Decision Management Systems can use rules and optimization to come up with the most effective schedules possible, given the constraints, saving money and lives at the same time.

To wrap up this discussion of use cases, some very generic examples. Many industries share common problems that can be lumped into a focus on process efficiency and effectiveness. Decision Management Systems, by handling critical decisions in those processes, can make a big difference on both counts. Some examples follow.

Validation. Has enough information been entered? Does it match other information available, and is it internally consistent? This kind of rules-based validation is common to many processes, and using a Decision Management System to automate this check speeds processing and reduces manual overhead.

Completeness and readiness. Many processes have steps that are more expensive, such as conducting an inspection or writing a contract. By automating a check to see if the process is ready to go to the next step (do we have all the information needed to effectively inspect this ship or building, or to put a contract together for this deal or annuity?), Decision Management Systems ensure that processes only move on when it makes sense to do so.

Plausibility. An interesting variation comes in situations where only a human can really tell if something is true or not, such as a customs declaration.
A Decision Management System might use rules and analytics to determine how plausible such a declaration is, helping focus limited resources where they will do the most good.

Assignment or allocation. Many processes involve assignments and allocations: decisions about who to make responsible, how to allocate the work involved, and who should do what. When processing speed is important, or when consistency and traceability are a must, a Decision Management System can provide rapid, agile, compliant processing.

Sequencing and Adaptive Case Management. Many processes are increasingly modeled using a more adaptive approach. Instead of laying out all the steps and branches, different clusters or groups of tasks are identified that may need to be handled for a particular transaction or case. Deciding which need to be included, and when, is a task ideally handled by a Decision Management System that is monitoring the case and constantly evaluating the most appropriate and necessary steps for the case.

Dynamic forms. Collecting data from people is a constant challenge. While sometimes a simple form or a form with a few options works well, sometimes it is very difficult for a user to determine what data is required. Each question they answer drives the need to answer, or not answer, subsequent questions. This kind of dynamic questioning is another good use case for Decision Management Systems.

Dynamic Checklists. Checklists are a powerful tool for improving the effectiveness of staff members. But to work, a checklist must be very specific. Trying to handle even a small number of situations with a single checklist can make for complex checklists with lots of navigational instructions to make sure the right items are checked at the right times. Instead, a Decision Management System can be used to drive dynamic checklists: very specific checklists generated for each circumstance, with all the checks needed for that circumstance but only those.
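A minimal sketch of rules-driven checklist generation, with invented conditions and items, shows how small the core idea is: each rule pairs a condition with a check, and the checklist for a case is simply the checks whose conditions hold.

```python
# Sketch: generating a circumstance-specific checklist from rules, as
# described above. The conditions and checklist items are invented.
CHECK_RULES = [
    (lambda c: True,                       "Confirm applicant identity"),
    (lambda c: c["vehicle"] == "ship",     "Verify hull inspection certificate"),
    (lambda c: c["vehicle"] == "building", "Verify occupancy permit"),
    (lambda c: c["value"] > 1_000_000,     "Obtain second appraisal"),
    (lambda c: c["international"],         "Check sanctions and customs status"),
]

def build_checklist(case):
    # All the checks needed for this circumstance, but only those.
    return [item for applies, item in CHECK_RULES if applies(case)]

print(build_checklist({"vehicle": "ship", "value": 2_500_000,
                       "international": True}))
```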
Besides these specific use cases, there are some key characteristics of decisions that make them suitable for automation using Decision Management Systems. These are discussed more fully in the book, but suitable decisions have four characteristics:

- Repeatable. If a decision is not made in a repeatable way, and made regularly, it will not be possible to automate it nor to show a return on doing so.
- Non-trivial. A decision must have a degree of complexity to make it worth the investment in the additional capabilities discussed above. There must be policies or regulations that drive and control the decision, a degree of expertise involved in making it well, or some analysis of information required.
- Measurable business impact. It must be possible to tell what the business impact of improving the decision will be, and even what a good decision is relative to a bad one. If the value of improvement cannot be described, or worse yet the value of a decision cannot be measured at all, then it will not be possible to show the value of a Decision Management System.
- Suitable for automation. Every organization has a different attitude to automation. Unless the organization is willing to consider a system to make the decision, there is no point in building a Decision Management System. A decision that must be taken by a person might involve dependent decisions that are suitable candidates for automation, but building a Decision Management System to automate a decision that an organization believes should be taken by a person will result in a system that does not get used.

Suitable decisions often break down into rules-centric decisions, such as eligibility, validation and calculation, and analytic-centric decisions, such as those relating to risk, fraud and opportunity.

Big Data is often described in terms of an increase in volume, an increase in velocity and an increase in variety: more data, of more types, arriving more quickly. This increase in Big Data volume, variety and velocity has clear implications that are driving the business case for Decision Management Systems. In an era where we must handle more data, and where the rate of increase in data is itself increasing, we have to face the limitations of human interpretation. Where we might once have assumed that a person could look at and usefully interpret all the data that might be relevant, this is increasingly impractical. Big Data volume has two main implications for Decision Management Systems. First, it makes the case for automating decisions stronger. Computers are generally much better at looking at lots of data, and at doing so quickly enough to be useful. They can be set up to balance recency against long-term trends and avoid many of the data interpretation problems that beset human decision-makers. As data volumes accelerate into the stratosphere, Decision Management Systems are your allies in making sense of all this data. Second, it makes the case for industrializing analytics stronger. The basic premise is that as there is more data, so you need more analytic models, and you need those models to be built more efficiently. This means applying more automation and more technology in the process of building the models themselves. This could be through machine learning, through fully automated modeling capabilities, or through automation added to tools for data scientists. It also means applying the latest in in-memory and in-database technologies to decrease the time all this modeling takes. The days when an individual modeler could hand-craft a complete model, sampling data carefully and doing every step by hand, are gone.

Big Data involves adding more types of data, from more sources inside and outside of the organization, to your analytic toolkit. Social, mobile, local and cloud data sources are exploding, and organizations must find ways to take advantage of these before their competitors do. This means that the old approach of pulling together all your data into a 360-degree view simply won't work anymore. You will never get caught up, as there is ALWAYS going to be another potentially useful data source. Instead, first model the decisions that impact your business and focus on integrating and delivering the data sources you need for a given decision. Variety also has two particular implications for Decision Management Systems. First, it means you have to broaden your definition of data infrastructure. Many (most) Decision Management Systems rely on an operational datastore that is relational and use analytic models built entirely from structured data. With the explosion of new data sources, often in unstructured or semi-structured formats, this is not going to work anymore. Your analytic team is going to need to be able to access data stored in a variety of formats (stored on Hadoop, for instance) and your operational systems may need to consume less structured records and make decisions against them (what to do with this sensor record, for instance). Same problems (how to build analytics, how to make decisions) but lots of new data sources to deal with.
Second, you will need to improve your skills in text analytics and entity analytics. Being able to identify what is being discussed in unstructured, text data sources, especially which products or actions, is key. You need to be able to tell that this email is about this product, that this customer keeps talking about the call center, and so on, and feed that insight into your modeling and your Decision Management Systems.

As more data arrives more quickly we have to deal with velocity in two ways. We have to decide more quickly, and we have to deal with data in motion (streaming data), not just data at rest. The first of these, like the increase in data volumes, simply increases the value of a Decision Management System. As our data arrives more rapidly, the value of processing and acting on that data in real time is likely to grow. This need for real-time responses pushes us inexorably towards Decision Management Systems because people just don't make decisions in real time. As real-time becomes the right-time, we must automate decision-making. This increase in velocity also tends to make decision value decay more quickly. The value of a decision decays over time (decision latency), and the increasingly rapid arrival of new data means that decisions will decay faster, as new data will make the old decision less relevant. This has a side effect of also making predictive analytics more valuable. With less time to decide, it becomes more important to have some predictive headroom: the further out I can see, the more time I have to respond. With slow-moving data it might have been enough to see yesterday's summary or today's. As data moves more rapidly we must see into the future, make predictions, if we are to have time to respond. The second implication of velocity is that we must get better at injecting decision-making into streaming data. We have to be able to package up business rules and analytics and inject them into a data stream, so that we can enrich the stream with decision answers or kick off parallel processes as the stream flows by. These require different deployment metaphors, with lower latency and more state management capabilities. The growing ability of business rules management systems to integrate with event handling, and the deployment of analytic models into streaming data infrastructures, are just two of the developments supporting this trend.
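A minimal sketch of stream enrichment, with an invented event source, model and thresholds: each event is decorated with a decision answer as it flows past. A production version would run on a streaming platform with real state management rather than a Python generator.

```python
# Sketch: enriching a stream of events with a decision answer as it
# flows by. The event source, fraud model and thresholds are invented;
# a real deployment would target a streaming platform with lower
# latency and proper state management.
def decide(event, model):
    score = model.score(event)                    # analytic: a fraud score
    if event["amount"] > 10_000 or score > 0.9:   # business rules on top
        return "refer"
    return "approve"

def enrich_stream(events, model):
    for event in events:
        event["decision"] = decide(event, model)
        yield event  # downstream consumers see the enriched event

# Usage (hypothetical source and model):
# for e in enrich_stream(event_source, fraud_model): process(e)
```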
Analytics Capability Landscape

Organizations today are turning to analytic capabilities to drive decision-making. But with the different types of decisions that need to be made, multiplied by the different types of analytic capabilities available, it can be difficult for organizations to choose the right capabilities for the situations at hand. Some decisions require simpler capabilities, while others require complex capabilities that need the support of a talented IT team. Additionally, organizations also need to consider the user of the analytic capabilities. While some users have the experience and skills to leverage tools that require programming and heavy analysis, others may benefit more from a simpler drag-and-drop graphical user interface. Choosing a tool that the intended user cannot get the most from, no matter how great the tool itself is, means that the tool will go unused, affecting decisions across the organization.

With the choice of tools comes the need to handle Big Data as well. No longer a buzzword, the era of Big Data means that analytic capabilities must extend to both structured and unstructured data, parsing the information to assist organizations with informed decision-making. Ultimately, any analytic capabilities used by the organization must align with business needs, be geared toward the intended user, and support decision-makers, both human and automated. To help answer these questions, Decision Management Solutions completed research in the Fall of 2014, examining the different types of analytic capabilities available to organizations and the business situations for which they are most appropriate. The complete report and associated infographic can be downloaded free of charge from the Decision Management Solutions website at decisionmanagementsolutions.com/analytics-capability-landscape.

Shifting the Analytic Focus

Historically, the focus for most organizations has been on reporting. Recently, this has shifted to a balance of reporting and monitoring. With the growth of Big Data analytics as a focus for companies, and companies' desire to become data-driven, the next 12-24 months are likely to see a shift toward decision-making as a focus. For instance, a live poll showed that three quarters of respondents were focused on reporting or monitoring today, evenly split between the two. But fast-forward 12-24 months, and almost eighty percent of respondents expect to be focusing on decision-making instead.

A New Approach

Given the wide range of analytic capabilities available, the various roles that can be involved in using these capabilities, and the range of analytic styles available, navigating the analytic landscape requires a new approach. This approach is decision-led, role-centric, and style-based. It allows an organization to navigate the analytic landscape, selecting appropriate analytic capabilities for each decision-making problem based not on the kind of analytic capability but on its fit for the intended purpose. By focusing on the kind of decision-making problem and on the role(s) involved in solving the problem, organizations can identify a suitable style of analytic capability (descriptive, diagnostic, or predictive) to ensure the chosen capability will be used effectively.

Navigating the Analytics Capability Landscape

Decision-Led

As organizations shift from a focus on reporting and monitoring to one focused on decision-making, the most important thing to know about each project is which decision(s) are being targeted for improvement. Only a clear view of the decisions will allow selection of appropriate analytic capability. The characteristics of the decision to be improved are at the heart of selecting an appropriate analytic capability. The four characteristics that matter most in describing a decision at this stage are:

Volume, or how often a decision is made.
Repeatability, or how similar each decision is.
Latency, or how long the organization has to make the decision.
Complexity, or how difficult the decision is.
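As a small illustration of capturing these four characteristics in a form a project team could compare across candidate capabilities, here is a hedged Python sketch; the DecisionProfile name and the example values are assumptions for illustration, not part of the research.

```python
# Illustrative sketch: describing a decision by the four characteristics
# above so that candidate analytic capabilities can be compared against it.
from dataclasses import dataclass

@dataclass
class DecisionProfile:
    name: str
    volume: str         # how often the decision is made
    repeatability: str  # how similar each decision is
    latency: str        # how long there is to make the decision
    complexity: str     # how difficult the decision is

claim_routing = DecisionProfile(
    name="Claim fraud routing",
    volume="10,000 per day",
    repeatability="high",
    latency="sub-second",
    complexity="moderate",
)
```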
Role-Centric

Decision-making problems can be solved in a variety of ways. The people who will be involved, and the roles they will play in improving the decision, will further constrain and direct the type of analytic capability to be used. While many roles exist in organizations, they can generally be classified as one of four types:

Business decision-makers.
Business analysts.
IT data professionals.
Analytic professionals.

A clear understanding of who is going to be involved in solving a decision-making problem is the next step.

Style-Based

With a clear understanding of the decision that is to be improved and the role(s) that will be involved, it is possible to identify the right type of analytic capability. Three elements define analytic style:

Interactivity: Is the capability designed for explorers or for settlers?
Presentation: Does the capability deliver a visual result or a numeric one?
Scaling approach: Is the capability a DIY one or a factory-made one?

This analytic style will determine whether the roles involved can use the capability to solve the decision-making problem at hand, far more effectively than the capability's position on an arbitrary maturity curve will. These three analytic styles can be combined in eight ways. In practice, there is overlap between the styles, but these combinations are worth considering as examples of the kinds of capabilities available.

Becoming an analytic organization is going to require a broad portfolio of analytic capabilities. These capabilities will need to include descriptive, diagnostic, and predictive analytic capabilities. Different capabilities do not replace each other so much as complement each other. Different problems will require different capabilities, depending on the decisions being improved and the roles involved. Making sure that capabilities can be selected from a broad portfolio will be important for long-term success. Thank you to our research sponsors FICO and OpenText. Download the complete report and infographic here.

Selecting Vendors

Each Decision Management System requires different subsets of the capabilities described above. The right set of vendors and products is going to vary, depending on the requirements and needs of both the project and the organization as a whole. There are many vendors, large and small, to choose from, with more being added every day. The products they offer have great breadth and depth of functionality in every area. Every one of the products listed continues to evolve and grow, adding new functionality and enhancing existing capabilities. Some vendors are merging to create complete product sets or suites under one umbrella, while others are collaborating to allow their products to be used together more effectively. PMML already provides some standards support for this collaboration, as the scoring sketch below illustrates, and new standards are on the horizon that will extend it.
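For instance, a model built in one vendor's predictive analytic workbench and exported as PMML can be scored in a separate runtime. The sketch below assumes the third-party pypmml package (which requires a Java runtime) and a hypothetical churn.pmml file with made-up input fields.

```python
# Hedged sketch: scoring a PMML model exported from another tool.
# Assumes the pypmml package; file name and fields are hypothetical.
from pypmml import Model

model = Model.load("churn.pmml")  # model exchanged between tools via PMML
result = model.predict({"tenure": 14, "monthly_spend": 42.0})
print(result)  # typically the predicted class and its probabilities
```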
There are a wide range of vendors available for each of the product categories you need to develop Decision Management Systems. Many organizations will have existing relationships with vendors and will use other software products they provide. Experience with clients shows that familiarity and comfort with a vendor, and confidence that you can work well with them, is a strong predictor of success. Some organizations will work with Systems Integrators or other service providers who have strong vendor relationships that will likewise contribute powerfully to a successful project. The fit of the vendor(s) you select with your organization is often much more important than the specifics of their functionality. All the vendors in this report have paying customers who are successfully using their technology to build some kind of Decision Management System. There is no magic or single best set of vendors or products. There is a rich set of vendors and products, and most, if not all, organizations could pick from multiple vendors or vendor combinations and be successful. Some things to consider:

When both decision logic and analytic insight must be combined in a Decision Management System, those Business Rules Management Systems and related products that are model-aware and can consume and integrate with predictive analytic models are more likely to be successful than those that are not.
A Predictive Analytic Workbench that supports a range of deployment options for predictive analytic models into production, and can monitor and manage these models, will generally require less integration work than those that do not.
Predictive Analytic Workbenches that support in-database modeling and in-database scoring (directly or through partnerships with others) are increasingly valuable.
An optimization environment that supports the generation of business rules (often through integration with data mining capabilities) as well as solving to produce a set of actions will provide more deployment options.
Components that support standard platforms and provide a rich set of APIs and thin-client interfaces are generally preferred.
Depending on the system involved, a focus on real-time or on batch (or on a mixture of the two) will be essential, as will an understanding of the need to support Java or legacy platforms.
The view taken by the organization of open source products may constrain or focus your selection process.

This section lists vendors and basic company contact information alphabetically. Although every attempt has been made to validate this information, Decision Management Solutions accepts no liability for the content of this report, or for the consequences of any actions taken on the basis of the information provided. Advertisements are the responsibility of the vendor concerned and have not been reviewed by Decision Management Solutions. The vendor section has two parts. First there is a list of vendors with their contact information. Many vendors listed have multiple products, some of which meet the criteria for inclusion in the report and some of which do not. A single eligible product is enough to get the vendor listed. The table of products that follows includes listings of individual products as well as links to First Look reviews, if any, on JTonEDM. Only the most recent product First Look is linked, though previous First Looks are generally referenced when a new one is written. Some First Looks cover multiple products; in these circumstances both products are listed and linked to the same First Look.

Actian Corp
ACTICO GmbH

Appendix – Decision Management Systems

Decision Management Systems are a new class of system. They bring together two kinds of systems: operational systems that manage the transactions of the business, and analytic systems that help you understand how to run the business better. The result is systems that actively work to help you run your business or organization. Decision Management Systems are agile, analytic and adaptive, and are built using a three-step process of decision discovery, decision services and decision analysis. They deliver high ROI by reducing fraud, managing risk, boosting revenue and maximizing the value of scarce resources.
There is more information on Decision Management Systems, their characteristics and the process of building them in the author's book Decision Management Systems: A Practical Guide to Using Business Rules and Predictive Analytics (IBM Press, 2012). The book is organized into three parts.

Part I: The Case for Decision Management Systems. The first three chapters make the case for Decision Management Systems: why they are different and how they can transform a 21st-century organization.

Chapter 1, Decision Management Systems are different: This chapter uses real examples of Decision Management Systems to show how they are agile, adaptive and analytic.
Chapter 2, Your business is your systems: This chapter tackles the limits of manual decision-making, showing how modern organizations cannot be better than their systems.
Chapter 3, Decision Management Systems transform businesses: This chapter shows that Decision Management Systems are not just different from traditional systems; they represent opportunities for true business transformation.
Chapter 4, Principles of Decision Management Systems: This chapter outlines the key guiding principles for building Decision Management Systems.

Part II: Building Decision Management Systems. Chapters 5 through 7 are the meat of the book, outlining how to develop and sustain Decision Management Systems in your organization.

Chapter 5, Discover and model decisions: This chapter shows how to find, describe, understand and model the critical repeatable decisions that will be at the heart of the Decision Management Systems you need.
Chapter 6, Design and implement Decision Services: This chapter focuses on using the core technologies of business rules, predictive analytics and optimization to build service-oriented decision-making components.
Chapter 7, Monitor and improve decisions: This chapter wraps up the how-to chapters, focusing on how to ensure that your Decision Management Systems learn and continuously improve.

Part III: Enablers for Decision Management Systems. The final part documents people, process and technology enablers that can help you be successful.

Chapter 8, People Enablers: This chapter outlines some key people enablers for building Decision Management Systems.
Chapter 9, Process Enablers: This chapter continues with process-centric enablers, ways to change your approach that will help you succeed.
Chapter 10, Technology Enablers: This chapter wraps up the enablers with descriptions of the core technologies you need to build Decision Management Systems.

Decision Management Systems have three critical characteristics that strongly differentiate them from current, mainstream business applications. Most such business applications are difficult, expensive and time-consuming to change; Decision Management Systems are agile and transparent. The business applications that support most organizations are entirely separate from the analytic environment of those organizations; Decision Management Systems are both operational and analytic. Finally, most business applications are designed and built to meet a specific set of requirements that is known and not expected to change; Decision Management Systems are adaptive, learning and improving as they are used.

Decision Management Systems are Agile

Business agility is an overused expression in corporate IT, with all manner of approaches and technologies being promoted as delivering business agility in some way.
Decision Management Systems are agile because the logic in them is easy to change and easy to adapt to changing circumstances. When new policies or regulations are issued, the logic that implemented them can easily be found and safely changed. These changes don't undermine compliance because Decision Management Systems are transparent: it is clear how they will work in the future and also clear how they acted in each specific historical situation. This agility allows more stable business processes, as changes are easy to make to the Decision Management Systems that support those processes, and it ensures that rapidly changing know-how and experience can be effectively embedded in systems without the danger that it will become stale and out of date.

Decision Management Systems are Analytic

Analytics is a hot topic in many organizations today. Yet most analytic systems are completely separate from the operational systems that run the business. These analytic systems rely on data extracted from the operational systems but are otherwise quite standalone. In contrast, Decision Management Systems deeply embed analytic insight to improve their operational behavior. Analytics are used to divide customers or transactions into like groups so that actions can be effectively targeted. Analytics are also used to make predictions of the degree of risk involved in a transaction, the likelihood of fraud, or the extent and type of opportunity available. These predictions are used to select from the available alternatives in a way that will manage risk according to the organization's guidelines, reduce fraud, maximize revenue and effectively allocate resources across competing initiatives.

Decision Management Systems are Adaptive

Business systems, like business people, need to constantly adapt and learn. They need to experiment and see if a new approach might work better than a long-established one, challenging conventional wisdom. They must manage trade-offs in an ever-changing business climate. They must allow their performance to be monitored in terms of how effective the decisions they make turn out to be. In this way Decision Management Systems are adaptive, built to respond to changing conditions and to support a process of continuous improvement through testing and experimentation.

Building Decision Management Systems involves many of the same techniques, tools and best practices that building any reliable, high-performance operational system involves. All the skills and experience an organization has in developing information systems apply. The new and changed activities required fall into three phases: decision discovery, decision services and decision analysis.

Decision Discovery

Decision Management Systems are focused on automating and improving decisions. Most organizations do not have a well-defined approach for finding, modeling and managing the decisions they make. To effectively build Decision Management Systems, then, the first step is to find the repeatable, non-trivial decisions in the organization that have a measurable business impact and are therefore candidates for automation and improvement. Examples of suitable decisions include checking the eligibility of a person for a government benefit or commercial product, validating that an organization can become a supplier or meets some defined criteria, pricing a loan or other financial instrument based on an assessment of the risk involved, and making an offer to a consumer to maximize the value of an opportunity to interact with them.
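As a hedged illustration of the first of these examples, a rules-centric eligibility decision can be as simple as the following sketch; the criteria and thresholds are invented for illustration, not drawn from any real benefit program.

```python
# Minimal sketch of a rules-centric eligibility decision. The question is
# "Is this applicant eligible?" and every return value is an allowed answer.
def benefit_eligibility(applicant: dict) -> str:
    if applicant["age"] < 18:
        return "ineligible: under minimum age"          # illustrative rule
    if applicant["income"] > 45_000:
        return "ineligible: income above threshold"     # illustrative threshold
    if not applicant["resident"]:
        return "ineligible: residency requirement not met"
    return "eligible"
```

In a real Decision Management System these rules would live in a Business Rules Management System where business users could find and change them, rather than being buried in application code.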
There are a number of ways to find these decisions. They can often be found explicitly, simply by interviewing and working with business experts. The tasks in business processes where choices are being made, or where there is a pause for review, are typically decision-making tasks, and many branches in processes are preceded by decision-making. Decisions can also be found by analyzing Key Performance Indicators and other metrics to see what choices make a difference to those metrics; it is unusual for something to be tracked as a metric if there are no choices made that cause it to go up or down.

The top-level decisions that these approaches find should be described, primarily by defining a question that must be answered to make the decision along with the allowed or possible answers. For instance, a claims review decision might answer the question "Is this claim fraudulent and what should we do about it?" with allowed answers including routing it to the fraud investigators, putting it through a regular claims review, or fast-tracking it for immediate payment. Top-level decisions can and should be decomposed into the subordinate decisions they depend on: the smaller decisions that must be made before the top-level one can be made. This decomposition is recursive and provides necessary detail on how these decisions are actually made day to day.

Decision Modeling

Decision modeling is a powerful technique used in Decision Discovery to capture decision requirements. Decision modeling has four steps that are performed iteratively:

Identify Decisions. Identify the decisions that are the focus of the project.
Describe Decisions. Describe the decisions and document how improving them will impact the objectives and metrics of the business.
Specify Decision Requirements. Move beyond simple descriptions of decisions to specify detailed decision requirements. Specify the information and knowledge required to make the decisions and combine them into a Decision Requirements Diagram.
Decompose and Refine. Refine the requirements for these decisions using the precise yet easy-to-understand graphical notation of Decision Requirements Diagrams. Identify additional decisions that need to be described and specified.

This process repeats until the decisions are completely specified and everyone has a clear sense of how the decisions will be made. At this point a requirements document can be generated, packaging up the decision-making requirements identified. This can act as the specification for business rules implementation work or for the development of predictive analytics. Alternatively, the model can be extended with decision logic, such as decision tables, to create an executable specification of the decision-making requirements. For a detailed discussion of decision modeling with the Decision Model and Notation (DMN) standard, download our white paper "Decision Modeling with DMN". You can also read more about decision modeling in the Best Practices section.
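A hedged sketch of the question-and-allowed-answers structure, with recursive decomposition into subordinate decisions, might look like this in code; the two subordinate decisions shown are invented to illustrate decomposition of the claims example above.

```python
# Illustrative sketch: a decision as a question plus allowed answers,
# decomposed recursively into the subordinate decisions it depends on.
from dataclasses import dataclass, field

@dataclass
class Decision:
    question: str
    allowed_answers: list
    requires: list = field(default_factory=list)  # subordinate decisions

claim_review = Decision(
    question="Is this claim fraudulent and what should we do about it?",
    allowed_answers=["route to fraud investigators",
                     "regular claims review",
                     "fast track for immediate payment"],
    requires=[  # hypothetical decomposition, for illustration only
        Decision("How risky does this claim score?", ["low", "medium", "high"]),
        Decision("Is this a repeat claimant?", ["yes", "no"]),
    ],
)
```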
Decision Services

The decision discovery step enhances traditional analysis and requirements-gathering tasks. Once the decisions are identified and modeled, an iterative process of development can begin. The objective is to develop Decision Services: coherent, well-defined components that make a decision for the other processes and system components in the solution. Beginning by defining simple interfaces that allow these services to be asked a question and to give back one of the allowed answers, an iterative approach is used to flesh out the decision-making. The decomposition of the decision shows the sequencing and structure of the decisions, and the business rules, predictive analytic models and optimization models required can be developed and added to this structure.

A complete Decision Service will require some combination of business rules, predictive analytic models and optimization models. Most will not require the deployment of optimization models; optimization is more likely to be applied to a large number of similar decisions, with the optimal actions identified for each decision used to derive new business rules that are more likely to result in optimal decisions in the future. Some decision services will not require predictive analytic models, especially those primarily concerned with eligibility and compliance where business rules dominate. Even when analytic insight is important to a decision, sometimes that insight can best be represented with a set of business rules mined from historical data. When probabilities are needed, however, predictive analytic models will either need to be executed by the decision service, executed in the database the decision service is using, or stored in the database that the decision service uses if a batch update of the prediction is acceptable.
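As a hedged sketch of such a Decision Service, the fragment below combines a predictive risk score with pricing rules in a single component that returns one of the allowed answers. The risk_model interface (a scikit-learn-style predict_proba) and the thresholds are assumptions for illustration, not part of the report.

```python
# Illustrative Decision Service: combine an assumed risk model with
# business rules so callers ask one question and get an allowed answer.
def price_loan(applicant_features, risk_model) -> str:
    """Answer 'How should this loan be priced?' for one applicant."""
    # Probability of default from a deployed predictive analytic model.
    risk = risk_model.predict_proba([applicant_features])[0][1]
    # Business rules matching price to predicted risk (invented thresholds).
    if risk > 0.6:
        return "decline"
    if risk > 0.3:
        return "approve at premium rate"
    return "approve at standard rate"
```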
Decision Analysis

Decision Services are developed and deployed as part of an overall systems development effort. Once deployed they must be monitored and analyzed to see if changes are required going forward. Decision Services should be monitored for both proactive and reactive changes: changes that might help improve performance as well as changes that are necessary, for compliance for instance. Performance management and other analytic tools can be used to assess the effectiveness of the decision-making embedded in the system. As changes are identified and proposed, it must be possible to effectively assess the impact of these changes before they are deployed. It may also be necessary or desirable to design new approaches and conduct new experiments to gather new data about what works and what does not. Any changes made should be monitored to make sure they work as expected.

There are many ways to make a case for a Decision Management System. The cost of development of a Decision Management System, plus the additional software required to develop it, must be offset by a satisfactory return if a case is to be made. The top-line benefits of a Decision Management System typically come in a number of areas:

Reduced losses by eliminating fraud and waste. Predictive analytic models that establish the probability of a transaction being fraudulent or wasteful can be combined with business rules for known fraud schemes and waste prevention policies to determine which transactions to reject or refer for investigation. Decision Management Systems have an excellent track record in dramatically reducing fraud.

Reduced risk exposure and better matching of risk to price. It has been said that there is no such thing as a bad risk, only a bad price. Using predictive analytic models to predict the risk of a loan or a policy, and then applying business rules to correctly price for the predicted risk, is a well-established use case for Decision Management Systems. Decision Management Systems can manage more fine-grained risk models, with dozens or hundreds of segments, and more closely match risk to price.

Increased revenue from targeted marketing and correct opportunity identification. When an organization has an opportunity it can be unclear how to maximize its value. As opportunities migrate across channels and as windows of opportunity get narrower, this gets even harder. Decision Management Systems can apply campaign and offer rules with predictive analytic models that estimate the propensity for specific offers to be accepted and to be profitable. The offer that is most suitable and most likely to be profitable can then be made, even if only a short time is available. Optimization can be used when there are constraints on how many offers can be made or on capacity, ensuring maximum return on those limited assets. Deciding on the next best action across channels ensures a focus on long-term customer value and increased revenue over time.

Productivity and maximized use of business assets, both physical and human. Decision Management Systems handle large numbers of routine decisions, freeing up people to focus on more complex, higher-value tasks. They can also assign people to those tasks most likely to see a return on that investment, such as assigning collections activities based on the likely total return. These same approaches can maximize the use made of limited assets, deciding how best to handle them at each moment.

Faster time to market and shorter time to respond. In many of these areas the time it takes an organization to get to market with something, or to respond to a change, is critical. By making it easier and quicker to add new business rules, Decision Management Systems can improve time to market and so add additional value. This can be particularly effective as part of a legacy modernization effort where hard-to-change components are upgraded to Decision Management Systems for increased agility and lower management costs.

Finally, when calculating the costs of a Decision Management System, it should be noted that these technologies often result in cheaper development relative to equivalent traditional coding approaches. Decision Management Systems are also often dramatically cheaper to maintain. This both reduces the total cost of ownership of the system once maintenance costs are included and makes it more likely that any given improvement to the system (to match a competitor's move or to take advantage of a fleeting opportunity) will actually be attempted.

Bibliography

Fish, Alan. Knowledge Automation: How to Implement Decision Management in Business Processes. New York, NY: John Wiley & Sons, Inc., 2012.
Fisher, Ronald A. The Design of Experiments, 9th Edition. Macmillan, 1971.
Forgy, Charles. On the Efficient Implementation of Production Systems. Ph.D. thesis, Carnegie-Mellon University, 1979.
Nisbet, Robert, John Elder, and Gary Miner. Handbook of Statistical Analysis and Data Mining Applications. Burlington, MA: Elsevier, 2009.
Taylor, James. Decision Management Systems: A Practical Guide to Using Business Rules and Predictive Analytics. New York, NY: IBM Press, 2012.
Taylor, James, and Neil Raden. Smart (Enough) Systems: How to Deliver Competitive Advantage by Automating Hidden Decisions. New York, NY: Prentice Hall, 2007.

This report can be freely circulated, printed and reproduced in its entirety provided no edits are made to it.
If you would like to publish an extract, please contact Decision Management Solutions at info@decisionmanagementsolutions.com. Quotes from this report should be correctly attributed and identified as © 2016, Decision Management Solutions. While every care has been taken to validate the information in this report, Decision Management Solutions accepts no liability for the content of this report, or for the consequences of any actions taken on the basis of the information provided.
