A sign posted at the entrance to rough back country, saying ‘this is your decision point’
Photo by Joshua Sukoff on Unsplash

AI Designs Decisions

Dissection of survey evidence on AI-powered decision-making

Ian Domowitz
Towards Data Science
6 min read · Jun 12, 2021

--

Havelock Ellis said it is not the attainment of the goal that matters; it is the things met with by the way. He was speaking of philosophy. In business, AI is all about goal attainment. The things met along the way are decisions.

Decisions are a focus of a recent Signal AI survey of 1,000 C-suite executives, conducted in an attempt to estimate the impact of AI on the U.S. economy. According to the survey, 96 percent of business leaders believe AI will transform decision making, and 92 percent agree companies should leverage AI to augment decision-making processes.

AI is not so sure.

Most decisions are not binary

Neither the survey nor the business leaders are informative about the types of decisions involved. Most respondents say they spend upwards of 40 hours a week on the process. No surprise: that is presumably why they are paid. But with 80 percent of leaders claiming there is too much data to evaluate, senior management is looking for relief. Where does AI fit in the picture?

AI aspires to set and achieve goals by motivating and guiding the organization through phases of decision making. Four kinds of decisions are relevant.

Policy decisions involve choosing what goals to pursue and how they will be attained. Proper adaptation of the technology to the company ought to define these objectives. AI risks failure at this step by falling in love with creative fire and failing to recognize practical guidelines.

Goals should be few in number but need not be terribly specific; remaining possibilities are jettisoned. Diffusion of purpose is a business risk. AI is good at ill-posed problems but is known for wandering off until it believes a problem is solved.

Any goal must be defined in terms of a problem. Good design dictates that goal determination should lead to an understanding of the problem. AI may not be good at understanding, but goal attainment can be elucidated as a set of solutions. Like any good corporate story, the narrative cannot be something as generic as raising additional revenue. An element of transformation must be involved.

AI embodies transformation but is not yet ready for policy decisions. AI must first be able to define problems itself. This is an achievable ambition, however. AI knows how to identify poor components or bad behavior, for example, turning up symptoms that in turn suggest problems.
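To make that last claim concrete, here is a minimal sketch of flagging “poor components or bad behavior” as statistical anomalies. It assumes scikit-learn’s IsolationForest; the sensor readings are synthetic and invented purely for illustration.

```python
# A minimal sketch: unsupervised anomaly detection as one way for AI to
# surface symptoms that suggest problems. Data is synthetic.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
healthy = rng.normal(loc=0.0, scale=1.0, size=(980, 2))  # normal readings
failing = rng.normal(loc=5.0, scale=0.5, size=(20, 2))   # degraded components
readings = np.vstack([healthy, failing])

detector = IsolationForest(contamination=0.02, random_state=0)
labels = detector.fit_predict(readings)  # -1 marks a suspected anomaly

print(f"{(labels == -1).sum()} suspect components flagged for review")
```

Flagging a component is the easy half; deciding whether the symptom amounts to a problem worth a goal remains a policy decision.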

Allocative decisions follow. Goal attainment entails apportionment of resources and responsibilities among personnel. There are positions with roles to play in helping the organization fulfill its goals. Each position has a specific function in attempting to make profits for the venture.

A position must solve a problem. When in doubt, go back to goal setting as a policy decision.

AI can assign value to internal resources and smartly allot types and quantities to a project. AI is thinking about taking over the responsibility of project management, which also fits into the goal-attainment function. Corporate life should only be so simple. The definition of its own role and the problem of human role assignment are beyond current technology. Nevertheless, a few human resources officers, such as Diane Gherson at IBM, are training AI in the art, loosely based on the model of generative adversarial networks.
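For readers unfamiliar with the reference, a generative adversarial network pits two models against each other: one proposes, the other critiques, and both improve in the exchange. The toy loop below is my own illustration of that adversarial idea in PyTorch; it is not a description of IBM’s system, and every name in it is invented for the sketch.

```python
# A toy adversarial training loop: a "proposer" (generator) learns to
# produce candidates that a "critic" (discriminator) can no longer
# distinguish from good examples. Purely illustrative.
import torch
from torch import nn

proposer = nn.Sequential(nn.Linear(4, 16), nn.ReLU(), nn.Linear(16, 1))
critic = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())
opt_p = torch.optim.Adam(proposer.parameters(), lr=1e-3)
opt_c = torch.optim.Adam(critic.parameters(), lr=1e-3)
bce = nn.BCELoss()

def good_examples(n):
    # Stand-in distribution of "good" outcomes the critic should recognize.
    return torch.randn(n, 1) * 0.5 + 3.0

for step in range(2000):
    # 1) Train the critic to separate good examples from proposals.
    proposals = proposer(torch.randn(64, 4)).detach()
    c_loss = bce(critic(good_examples(64)), torch.ones(64, 1)) + \
             bce(critic(proposals), torch.zeros(64, 1))
    opt_c.zero_grad(); c_loss.backward(); opt_c.step()

    # 2) Train the proposer to fool the critic.
    p_loss = bce(critic(proposer(torch.randn(64, 4))), torch.ones(64, 1))
    opt_p.zero_grad(); p_loss.backward(); opt_p.step()
```

The analogy in the HR setting is loose, as the text says: human feedback plays the critic, and the AI’s proposed role assignments play the generator.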

Role conflict is a common result of mindless org charts, a very human invention. The exposure of AI to conflicting sets of role expectations ensures that problems arise having nothing to do with the company’s goals. The trope of HAL in 2001: A Space Odyssey occupies mainstream consciousness and, science-fiction reference aside, is representative of the problem. Compromise is a matter of negotiation, not org chart templates. Otherwise, AI is exposed to negative sanctions and internal conflict. HAL killed (almost) everyone.

AI adopts multiple roles and can order their allocation, as well as the allocation of others’ roles within the company. Relations to others are governed by interests and orientations that mesh with those of AI in different ways. These differences adjust through an allocation of the claims to which AI is subject. The ordering occurs by priorities, by context, and by distribution of gains. Some activities have appropriate partners; others are not a good fit with the available partners, time, and space.

The allocative ordering of AI’s role system is delicately balanced. Any major alteration in one part may encroach on others and necessitate a whole series of adjustments. Human rebellion otherwise follows. Fragility is a poor attribute for a decision process.

Coordinative decisions concern how personnel are motivated and how their contributions are regulated. Compensation dominates the discussion within an internal business plan. AI is not able to tempt employees with jumps in pay and relative position within a larger organization. People must buy into the concept and willingly accept their roles as levers in the machine.

AI should provide a sense of purpose. Motivation follows.

AI needs to think about this part of goal attainment. Society is concerned with human job loss, and economists anticipate retraining into professions under the AI umbrella. Retraining worked during the First Industrial Revolution, but the historical record shows a great deal of short-term pain and suffering along the way.

Decisions are easier when values are in place

AI has its own question to ask here: do humans reorganize decision-making for AI, or does AI organize the process for the company? The answer is not obvious, but the introduction of values provides guidance.

Supporting values are those serving to legitimize decision-making rights. The definition and mode of communication of those values constitute the fourth set of decisions. Including values within goal setting is an opportunity not to be missed.

Decision rights provide a means to birth the culture of AI without disturbing overall company culture.

Stability requires the interests of employees to be bound in conformity with a shared system of values. Reactions within the company to AI’s actions are structured as a function of allegiance to that system. Conformity as a means of goal fulfillment coincides with eliciting favorable, and avoiding unfavorable, reactions from others.

Conformity with a value standard meets these criteria. From AI’s perspective, it is a mode of fulfilling its own needs and a condition of optimizing the reactions of decision-makers within the firm. A value pattern becomes institutionalized in a context of personnel interaction.

Institutionalization of AI’s role in decision making is a matter of company expectations.

Role expectations set standards for the behavior of AI. There is also a set of expectations relative to the reactions of others. The latter are sanctions, which in turn may be positive or negative. The difference, to AI, is whether they promote gratification or deprive it of action. The relation between role expectations and sanctions is reciprocal: sanctions to AI are role expectations to the company, and vice versa.

AI’s decision-making role is organized around expectations integrated with a set of values. The same values govern interaction with those in complementary roles to AI. The institutionalization of role expectations and of corresponding sanctions is a matter of degree. The antithesis of institutionalization is the complete breakdown of normative order. This cannot happen.

The survey author has a takeaway

AI achieves success when steering the organization through phases of decision making. This should not be confused with success in making the decisions themselves, nor with AI-assisted augmentation of details in the process.

The statement is part hope and part trepidation. Hope is the message from the survey, but Signal AI CEO David Benigson notes that business leaders tend to have unrealistic expectations. “Just like with other technologies, they are overestimating the impact of AI in the short term and underestimating it in the longer term.”

The four kinds of decisions illustrate the difficulties involved, and within-firm expectations figure prominently in the painting. In that sense, Benigson is correct. AI is not like other technologies, however; it is the first to present an existential challenge to the workforce. Senior management is included in that challenge as AI turns to decision making.

The workforce collectively underestimates the impact of AI in the short term by moving to cancel or delay projects based on fear of techlash and regulatory risk. It overestimates effects in the long term by making the concept of general AI a central feature of the existential debate.

AI is just a baby, and we don’t trust babies with decisions. They break things met along the way.

--

Ian Domowitz currently serves on the Board of Directors of McKinley Management and can be found at IanDomowitz.com.