Tuesday, 28 August 2018

Paying to have your exercise taken

Turning Service Management into a Cargo Cult

The case for Service Governance & VeriSM


I enjoyed a most excellent lunch this past weekend, during which I was chatting to a senior manager in a retail organisation. I was struck by her comments on the 'Service Management people'. I thought her words summed the problem up well. To paraphrase them:

"
I'm not sure what the point of the service management people is. When they come to see us, they either want to tell us what to do, or they want us to take lots of measurements of metrics that don't seem to make much sense. The app development people seem to understand things much better, they talk to the business, and understand what they want.

I have known this organisation, on and off, for a couple of decades, so this wasn't a surprise to me. I even taught a tailored ITIL foundations course to a team from the organisation a few years back, to help out a friend who was consulting to them but didn't have an ITIL qualification. I was disappointed, even then, that they wanted to do everything on the cheap, and that those attending were mainly very junior, inexperienced employees, all in IT.
As far as I can see, the organisation should call it a day and close down its service management section. It is a tribute to the people in that section that they have survived so long, because the organisation has a habit of carrying out Stalinist purges, in the form of reorganisations, every two or three years. These are conducted with such sadistic secrecy and slowness that the whole organisation is paralysed for months, with everybody gossiping about the cuts and hoping the axe will fall on another person or department.

More than that, nobody should try using service management there for a long time. The whole idea has been poisoned, so that it now seems it is the thing itself, rather than poor execution, that is no good.

To have worked so hard, for so long, surviving these purges, should count for something, at least for the people themselves. However, the result of the decade (or more) of effort is nothing. They are a cargo cult, going through the motions as if the organisation had adopted service management, when, in fact, as my conversation demonstrates, the organisation, like many, uses service management techniques but has no understanding of service management at all.

They are like a person who pays somebody else to do their exercise for them. No matter how good the exerciser, and no matter how hard he works, the benefit is not going to accrue to the person who pays for it, but does no exercise himself.

The reason for the failure is simple: Trying to do Service Management bottom up does not work. It is deeply frustrating, difficult, and futile.

Service management is not a useful end in itself. It is only useful as a tool to help organisations produce value. It might be useful to have a group looking after some of the specifics, but service management is not carried out by one little team; it is carried out by the whole organisation, or not at all.

Unless the governing body of an organisation recognises what service management brings to the business, and decides to adopt it across the organisation, it is usually better not to try introducing it. Yes, you pilot a part of service management to make a business case to the board, but not more than that.

Service Governance and VeriSM recognise this, and are aimed at governing organisations through the service metaphor. They gain traction by using governance to set the policy for management restructuring of the positive sort, aimed specifically at those things required to produce organisational value.


Thursday, 7 June 2018

Conjecture: Any set of rules rich enough to be useful can be gamed - The Internet of Things (IoT)

Rules are becoming very important. Robots rely on rules. Self-driving cars will rely on rules. Already, people have been killed because the rules have been inadequate to deal with quotidian reality. The Internet of Things (IoT) is busy working on all sorts of rule-based entities that will become part of the fabric of our lives.

A lot of work is being carried out in robotics, artificial intelligence, and other areas to deal with the problems that exist when you create rule-based systems that interact with the real world, and human beings.

Some work has been done to deal with the problems of inconsistent programming logic. Ada, particularly through its SPARK subset, was designed to allow formal proof, or verification, that programs written in it do what they are intended to do.

The problem, though, is deeper than that. There are some mathematical and programming theories that have a bearing on rule-based systems and give some insight into how they will behave. Game theory allows some conclusions to be drawn about different agents making choices. Queuing theory allows some conclusions to be drawn about how long it might take for decisions to be made, or services to be delivered. And there is Prolog, a language designed to work with logical propositions, which allows programs to be written, and tested, at a logical level.

What there does not seem to be is a stand-alone theory of rule-based systems.

As human beings, we are familiar with working with rules. We know that even very carefully written rules, such as legal statutes, are open to interpretation and can be 'gamed'.

'Gaming' rules means, in essence, taking a set of rules that are intended to fulfil one set of objectives, and finding a behaviour, or set of behaviours, that obey the rules, but accomplish a quite different set of objectives, often a contradictory set.

As a simple example, the rule might be that a help desk person (I'll not say 'agent' because that may be confused with robotic agents) must minimise the time spent on calls. The objective of this rule is that the organisation will service as many callers as possible, as well as possible. It soon becomes apparent that, if a caller has a complex requirement that will take some time, it is possible to make the work appear to fit within the rules by closing the call when it gets near the required maximum time, and opening a new call. This is inconvenient for the caller, and gives the organisation a distorted picture of how long calls actually take, but allows the help desk person to comply with the rules.
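To make that concrete, here is a minimal sketch in Python. The names, numbers and logging functions are all hypothetical, invented for this illustration; the point is only that the gamed log satisfies the rule checker while the real time spent is unchanged.

```python
# A minimal sketch (hypothetical names and numbers) of how a "maximum call
# duration" rule can be gamed by splitting one long call into several short ones.

MAX_CALL_MINUTES = 10  # the rule: no logged call may exceed this


def log_calls_honestly(handling_minutes):
    """Log the work as a single call, however long it really took."""
    return [handling_minutes]


def log_calls_gamed(handling_minutes):
    """Close and reopen the call just before the limit is reached."""
    calls = []
    remaining = handling_minutes
    while remaining > 0:
        calls.append(min(remaining, MAX_CALL_MINUTES - 1))
        remaining -= MAX_CALL_MINUTES - 1
    return calls


def complies(calls):
    """The rule checker only sees individual call durations."""
    return all(duration <= MAX_CALL_MINUTES for duration in calls)


if __name__ == "__main__":
    real_work = 35  # minutes actually needed by one caller

    honest = log_calls_honestly(real_work)
    gamed = log_calls_gamed(real_work)

    print("honest log:", honest, "complies:", complies(honest))  # False: rule broken
    print("gamed log: ", gamed, "complies:", complies(gamed))    # True: rule 'satisfied'
    print("real time spent in both cases:", sum(gamed), "minutes")
```

A metrics dashboard built on the logged calls would report full compliance, even though nothing about the underlying work has changed.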

It is quite difficult to put this into a formal language and show how the rule is being gamed.

So, in this article, I am proposing a conjecture:

"
Any set of rules rich enough to be useful can be gamed.
"

I believe it to be true, but realise that, as it stands, it cannot be proven or demonstrated to be true. What we need is some sort of formal system that will:

1. Define a 'set of rules'
2. Define 'rich enough'
3. Define 'useful'
4. Define 'gamed'
5. Allow theorems to be produced, so the above conjecture can be stated formally (a rough sketch of what such a statement might look like follows this list)
6. All the above clearly require the use of fuzzy logic, and perhaps also modal logic and dialetheism, so these would provide a basic set of tools on which the new rule-language would be based
7. The aim would be that any rules so produced would be strictly provable, as Ada programs are provable
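Purely as an illustration of point 5, and with every predicate name invented here rather than drawn from any existing formalism, a formal statement of the conjecture might eventually take a shape something like:

\forall R \,.\; \bigl(\mathrm{Rich}(R) \wedge \mathrm{Useful}(R)\bigr) \;\Rightarrow\; \exists b \,.\; \bigl(\mathrm{Complies}(b, R) \wedge \neg\,\mathrm{Achieves}(b, \mathrm{Objectives}(R))\bigr)

Read informally: for any rule-set R that is rich and useful, there is some behaviour b that complies with R without achieving R's objectives. The whole difficulty, of course, lies in giving Rich, Useful, Complies and Achieves precise (probably fuzzy) definitions.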

All that is far too much work simply to confirm, or disprove, my conjecture. However, if such a formal system did exist, it would be extremely useful for defining rules that satisfy real-world requirements - such as human safety - in a formal manner that could be translated into a working Ada or Prolog program, which could then be used to operate a self-driving car, or IoT device, with a high degree of certainty that the rules will operate as required.

There are a number of existing notations and formalisms, such as graph theory, Business Process Modeling Language (BPML), Ada, Prolog, and various ontology languages such as OWL, that could usefully be brought to bear on this problem.

I think that it would be useful to get funding for a competition to answer this conjecture. That would provide an incentive for mathematicians, logicians, process engineers, robotics experts and others to take part. The competition would provide a loose framework for the above, and require those taking part to show how it could be tightened to be unambiguous and strong enough to answer the hypothesis.

Then the rule-sets so developed could be tested against real-world problems. For example, take a self-driving car that has a universal top-level rule (rules would need to have a defined scope) that it must not hit people. The rule set could then be tested against: someone kicking the car (would it count that as a 'fail'?), a person landing on the car from a hang-glider or paraglider, a cyclist, a pedestrian with metal crutches (if a sensor recognises metal as non-human), a wheelchair user, a skateboarder, and a gorilla. Also, of course, all of these under different terrain and lighting conditions.

The question isn't really about how good the sensors are, but about how the rules interpret the results, to ensure that these edge cases (which game the system, in a sense, even if not intentionally) do not break the cardinal rule. This is just an example, and many more test cases could be established. For the competition, something like a turtle world would do, because complex sensors aren't part of the puzzle - just the integration of the fuzzy logic from whatever sensors there are into high- and low-level rules, consistently and, in the formal sense above, 'usefully'.
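As a purely hypothetical sketch of what such a turtle-world test might look like in Python (the classes, rules and classifications below are all invented for illustration, not taken from any real system), a deliberately naive classification sub-rule shows how edge cases can defeat a cardinal rule:

```python
# A minimal, hypothetical turtle-world sketch: a cardinal rule checked against
# edge cases. All names and classifications are invented for illustration.

from dataclasses import dataclass


@dataclass
class DetectedObject:
    label: str        # what the (assumed perfect) sensor reports
    is_metallic: bool


def naive_is_human(obj: DetectedObject) -> bool:
    """A deliberately naive classification rule: metal means 'not a person'."""
    return obj.label == "pedestrian" and not obj.is_metallic


def cardinal_rule_allows_motion(obj: DetectedObject, distance_m: float) -> bool:
    """Top-level rule: never move towards anything classified as human within 5 m."""
    return not (naive_is_human(obj) and distance_m < 5.0)


if __name__ == "__main__":
    edge_cases = [
        DetectedObject("pedestrian", is_metallic=False),  # plain pedestrian
        DetectedObject("pedestrian", is_metallic=True),   # pedestrian on metal crutches
        DetectedObject("cyclist", is_metallic=True),      # cyclist
        DetectedObject("gorilla", is_metallic=False),     # not labelled 'pedestrian' at all
    ]
    for obj in edge_cases:
        allowed = cardinal_rule_allows_motion(obj, distance_m=3.0)
        print(f"{obj.label:10} metallic={obj.is_metallic!s:5} -> motion allowed: {allowed}")
```

The pedestrian on metal crutches and the gorilla 'game' the rule unintentionally: the naive classification sub-rule lets the cardinal rule approve motion towards them, even though the intent of the rule is clearly violated.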

Finally, an example to illustrate what 'useful' and 'gamed' might mean. The rules of chess can be written down quite simply, but on their own they don't enable you to produce a chess robot; they are not rich enough, so they are not 'useful' in the automation sense. Early chess-playing programs could be gamed quite easily. One method was to move the king forward, ideally to the other side of the board. This made the chess program behave erratically and made it much easier to beat. The reason was that it made decisions based partly on giving each square a static positional value; moving the king to where it was not expected to be upended these values, so, when moves against the king were evaluated, they were not given appropriate weight. A more sophisticated set of rules would give values to squares based on the actual position of the king(s), not statically, just as a human player would update their tactics if the king moved. The question the conjecture poses is whether such improvements to the rule-set can ever prevent such gaming of the rules.
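By way of a toy illustration (the heuristic and numbers here are invented, not taken from any real chess engine), the difference between the static and king-relative evaluation rules might look like this:

```python
# A toy sketch (invented numbers) of a static positional table versus a
# king-relative one, illustrating how the static rule can be gamed.

# Static rule: squares near the opponent's *starting* king position are valuable.
BLACK_KING_START = (7, 4)  # (rank, file) for a black king on e8


def static_square_value(square):
    """Value fixed at design time: assumes the king stays near its start square."""
    rank, file = square
    dist = abs(rank - BLACK_KING_START[0]) + abs(file - BLACK_KING_START[1])
    return max(0, 10 - dist)


def king_relative_square_value(square, actual_king_square):
    """Same heuristic, but recomputed from wherever the king actually is."""
    rank, file = square
    dist = abs(rank - actual_king_square[0]) + abs(file - actual_king_square[1])
    return max(0, 10 - dist)


if __name__ == "__main__":
    attack_square = (6, 4)    # a square the program is considering attacking
    wandering_king = (3, 0)   # the opponent has marched the king up the board

    # The static table still rates the square as if the king were on e8.
    print("static value:       ", static_square_value(attack_square))
    # Recomputed relative to the king's real position, the square is worth far less.
    print("king-relative value:", king_relative_square_value(attack_square, wandering_king))
```

The static rule keeps rewarding attacks on squares the king has long since left, which is exactly the weakness the 'wandering king' strategy exploited; the king-relative rule closes that particular loophole, but the conjecture asks whether some other loophole always remains.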

If you are reading this, and are aware of work being done in this specific field, please get in touch, or leave a comment to this article.

If you would be interested in contributing to research in this area, likewise.

Hashtags:

#Rule #Game #Theory #Logic #AI #Maths #Philosophy #GameTheory #RuleTheory #Robotics #Automation #Psychology #Behaviour #Perversity #Conjecture #knowledge #rules #learning #gaming #API #IoT #DeepLearning #DigitalMaking #DataScience #DigitalTransformation #Infosec #CyberSecurity #Ada #BPML #Prolog #OWL #Ontology #GraphTheory #Graphs #MachineLearning #KnowledgeManagement #Governance #ServiceGovernance #Safety #HealthAndSafety #Robot #SelfDriving #Car #FuzzyLogic