Tuesday, October 27, 2015

The Limitations of Scientific Experiments



            Two articles I have recently read suggest that the scientific method is not always as reliable as we might have hoped.  Evan Horowitz discusses in his essay “Studies show many studies are false” for the Boston Globe (7/1/14) how there have been studies of scientific studies showing that most of the studies under consideration can’t be replicated.  When a German pharmaceutical company tried to replicate the results of 67 published studies from academia, it was able to do so in only one quarter of the cases.  The American company Amgen tried to recreate 53 cancer studies and got results that matched those of the original studies in only 6 cases.  The results of these studies of studies are significant, because if a scientific study cannot be replicated by different investigators, then its conclusions cannot be considered true and accurate.  Ian Sample, the science editor for the Guardian, wrote in his article “Study delivers bleak verdict on validity of psychology experiment results” (8/29/15) that a recent effort to replicate the results of 100 experiments published in major psychology journals succeeded only 36% of the time.  More precisely, only 50% of the studies in cognitive psychology and a mere 25% of the studies in social psychology could be replicated.  In addition, even when these psychology results were successfully replicated, the average effects were only half as large as the first time the experiments were performed.
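For what it is worth, the replication rates reported above can be tallied in a few lines of code.  This is a minimal sketch; the labels are mine, and the count of 17 out of 67 is my own approximation of the "one quarter of the cases" figure given in the article:

```python
# Replication figures as reported in the articles discussed above.
# The count 17/67 approximates "one quarter of the cases."
replication_efforts = {
    "German pharmaceutical company (academic studies)": (17, 67),
    "Amgen (cancer studies)": (6, 53),
    "Psychology replication effort (major journals)": (36, 100),
}

for label, (replicated, attempted) in replication_efforts.items():
    rate = 100 * replicated / attempted
    print(f"{label}: {replicated} of {attempted} replicated ({rate:.0f}%)")
```

Run this way, the numbers come out to roughly 25%, 11% and 36%, which is what makes the overall picture so bleak.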

            None of this is very encouraging for those of us who have looked to the scientific method as the foundation for much of modern knowledge.  All kinds of explanations are given for these discrepancies.  Evan Horowitz quotes a research professor named John Ioannidis, who had himself written an article, “Why Most Published Research Findings Are False” (8/29/05), in which he enumerates academic pressure, sample sizes that are too small, and a preference for surprising experimental results as reasons for the publication of non-replicable studies.  Horowitz himself adds the unconscious selection of certain kinds of subjects for the sample and the problematic nature of early studies on a particular experimental issue.  Ian Sample points out that scientists can change their methodology in subtle ways when they repeat an experiment.  Or some of the conditions in which the experiment is carried out could be different from those of the first performance of the experiment.  Or some chance element could affect the attempt to repeat the original result.  Finally, the original experiment could have been in some way defective, leading to a false positive.

            Some of these reasons hint at a line of thought I have been developing throughout my articles with regard to the way humans grasp truth and reality.  Reality includes a human’s entire field of experience and the whole configuration of stimuli that he receives from that field of experience.  There are three kinds of stimuli in a field of experience.  There are the focused measurable defined discrete stimuli that define the boundaries of things and ideas.  There are the blurry unmeasurable flowing blendable continual stimuli that have no defined beginning and no defined end, like the tide of a lake or an ocean or the shifting shapes in a lava lamp.  Finally, there are the endless infinity stimuli found in total darkness and total silence: the black in total darkness and the slight hum in total silence.

Scientific truth deals only with the first of these three categories.  Science looks to discover in order to control, to manipulate and to predict within the human field of experience, and thus to improve the human living condition and protect people from harm.  To do this, it needs to be able to work with controllable, manipulable and predictable phenomena: phenomena that it can focus on, measure and dominate.  In other words, science needs to be able to work with phenomena that, at least to a great extent, are defined discrete figures that emit defined discrete stimuli.  These are the phenomena that allow scientists to set up experiments where they can trigger certain processes with the hope of arriving at certain results.  If the scientists succeed, they can then try to apply what they learn to the physical environment, the chemical environment, the biological environment, the psychological environment or the social environment, and thus generate certain improvements in the human living condition or certain protections against harm.

            But the fact is that the results of a scientific experiment represent the excision of certain elements from the total reality surrounding the experiment: those elements that lead to the hoped-for pliable maneuvering of at least part of the environment.  But reality is constantly shifting, even the reality of an environment that is supposed to be as sensorily controlled as a laboratory.  As much as scientists try to make laboratories as sensorily neutral as possible, each laboratory is going to have some elements that make it different from the previous lab where an experiment was performed.  The layout, the lab furniture, the lighting, the air flow, the odors – all can have an influence on the performance of the experiment.  So can the time of day and the time of year at which the experiment is performed.  And, of course, different scientists and technicians can unconsciously have their own unique influence on the result of the experiment.  No two people are alike.  The only way to get rid of the human element’s influence when performing an experiment is to somehow find a way to get rid of the human element.  Perhaps one could perform identical experiments using identical robots, and that would minimize the effect of different personalities on all the different aspects of the experiment.  But the introduction of a robot, in order to make the conduct of the experiment as sterile and neutral as possible, could itself have an unforeseen effect on an experiment and create its own distortion in the results.  This would be particularly true in psychological experiments, where the experiment relates to the impact of a total experimental presentation on human subjects.

            There are just too many uncontrollable elements that can impact the results of even the best-constructed experiment.  These elements relate to how the people and things in the experiment are grounded in their setting and to the unmeasurable flowing blendable continual stimuli that are constantly appearing even in the most isolated laboratory environments.

            So what are we to make of scientific studies, given this experiential wrench that is constantly being thrown into the experimental situation?  Perhaps, first and foremost, we should understand that as seemingly neutral as science is in building a body of knowledge, it does have an ulterior motive: to help humans gain some kind of control over their living environment.  This has to be done by gaining control over those aspects of their living environment that are most amenable to control.  And this means shutting out those aspects of the living environment that are not so amenable to control.  But by definition, those aspects that are not so amenable to control are not going to be easily shut out by the humans trying to do so.  And this means that as hard as humans may try, their experiments, like their lives in general, are always going to be impinged upon by unforeseen influences.

            In particular, experiments and lives are always going to be impinged upon by flowing blendable continual stimuli.  Modern technological humans are always going to focus in both their experiments and their lives on the defined discrete stimuli that give humans the illusion that they will someday be able to effectively control the totality of their living environment.  Scientific experiments, even with their distortions, can contribute to results that sometimes lead to a partial control over a particular aspect of the environment.  We humans should be grateful for this, even as we acknowledge that our scientific truths will never allow us to gain domination over the totality of human reality.

The topic for this article was suggested by Dr. Jorge Cappon.

© 2015 Laurence Mesirow

The Algorithm Way Of Life



            Much of human history has involved learning how to focus with our eyes and our minds in order to see things as highly defined discrete entities that can be managed and manipulated through a series of defined discrete steps to achieve certain ends.  As we have become better at focusing, we have become better at managing and manipulating.  The vehicle through which we have been increasingly translating strong focus into more strongly defined ends is the algorithm.  An algorithm is a mentally constructed procedure that follows a series of well-outlined steps in order to achieve a certain goal.  Although the concept was first developed by the Persian mathematician Muhammad ibn Musa al-Khwarizmi in the ninth century AD, it is in the modern world that it has found application in so many areas of daily life.  Algorithms are used in computer applications, in applications for industrial machines, and in business strategies, including even the hiring of new employees.  These are just some of the formal applications of algorithms, where people are conscious of setting up defined formats for daily operations in which numbers and data play a significant role.
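To make the definition concrete, here is a minimal sketch of an algorithm in exactly this sense, a well-outlined series of steps that reliably reaches a goal.  The example, Euclid's method for finding the greatest common divisor of two numbers, is my own choice of illustration and is not drawn from al-Khwarizmi's work:

```python
def greatest_common_divisor(a: int, b: int) -> int:
    """A classic algorithm: a fixed series of defined discrete steps.

    Step 1: if b is zero, a is the answer.
    Step 2: otherwise, replace (a, b) with (b, a mod b) and repeat.
    """
    while b != 0:
        a, b = b, a % b
    return a

print(greatest_common_divisor(48, 18))  # prints 6
```

Whatever the inputs, the same defined discrete steps are followed mechanically to the same kind of end, which is precisely what makes algorithmic procedures so attractive for computer applications, industrial machines, and business strategies alike.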

            But the visual and mental states that are adopted in order to perceive the phenomena in one’s field of experience as highly focused entities are visual and mental states that cannot be turned on and off so easily.  These hyper-focused mental states start bleeding into many different areas of daily life, many of which one would not previously have associated with algorithmic thinking.  Linguistically, these mental states tend to experience words only in terms of their literal denotative meanings and not in terms of their more figurative connotative meanings.  This is why many people today feel that they can rely on computers to translate from one language to another.  It is because they look at language as simply a code, free from the cultural and psychological symbols that allow words and the meanings they connote to bond deeply together and form larger cultural meanings for the people who speak and listen to the words.  The truth is that a language is much more than a code.  In a code, words have a one-to-one correspondence with their meanings.  In a language, these denotative meanings are like the tip of an iceberg filled with connotative meanings.  Without the presence of connotative meanings, words become isolated figures that are strung together into sentences and that float in an experiential vacuum.  Advanced robots speak using linguistic algorithms.  And so do hyperfocused humans.  Linguistic algorithms not only prevent words from deeply bonding with one another through symbolic connotations, they prevent people from properly using language as a vehicle to bond deeply with one another.  Linguistic algorithms and hyperfocused mental states impede the proper development of intimate connection.  Using linguistic algorithms, people bond in a shallower way, in a way that serves only temporary contingent purposes.

            But it is not only language that is ruled by algorithms today.  Many people’s whole lives are ruled by algorithms.  Middle-class parents develop a plan with defined discrete steps for their children: children get sent to the proper pre-school in order to get accepted by the proper grammar school and do well academically there, which allows them to get accepted by the proper high school and do well academically there, which allows them to go to the proper university, which leads to the appropriate graduate school, which ultimately leads to a successful career.  While growing up, these algorithm-ruled young people are given special outside enrichment classes in additional academic subjects as well as in art, music, and sports.  Almost every moment is precisely slotted.  Every encounter with the world is geared to be one more mini-step up the ladder to success.  Unlike in previous generations, there is very little time for free play, either with others or by oneself.  There is very little time to just do nothing and engage in daydreaming.  Free play and daydreaming are considered a waste of time and not productive.

            So the child grows up with a mind that has always been focusing on something and that has developed a lot of defined discrete compartmentalized skills.  But the child has never had the opportunity to develop an organic coherent sense of self with the capacity for deep-bonded intimacy with other human beings.  The algorithm of his childhood plan of development has basically turned him into a robot.

            Thinking constantly in algorithmic terms has effects in several different areas of life as the child grows up.  Because the child lacks an organic coherent sense of self, he also is incapable of recognizing other people in terms of their organic coherent senses of self.  Instead, he sees other people in terms of the same kind of overly-focused compartmentalized defined discrete functions that he perceives in himself.  He becomes incapable of deep-bonding with another person, communing with another person’s whole sense of self.  Instead, he shallow-bonds with the person, connecting with the person in order to utilize specific functions that the person has to offer.  This approach to relationships makes real intimacy difficult if not impossible.

            And this has profound effects on the development of stable families and stable communities.  No wonder divorce has become so common and so many families are fragmenting.  And lack of emotional grounding in stable family relationships contributes to major problems of emotional health.  

            Another problem area is that of creativity.  People who grow up with a fragmented sense of self, who lack a sense of their own internal connectedness, lack a capacity for seeing and creating symbolic connections between different parts of human experience in the world.  And yet symbolic connections are the foundation of much of what we call great works in the arts.  It is these meaningful symbolic connections that leave deep organic imprints on people’s minds.

            Instead, many artists today project their internal fragmentation out on the world by creating works with fragmented thoughts (post-modern poetry), fragmented images (much of contemporary art), and fragmented melodies (modern atonal classical music).  In these cases, it is as if the algorithms that modern humans use to hold their lives together completely fail.  Algorithms are just no substitute for organic grounding and organic bonds.

            Of course, some people today do make an effort to create super-coherent images in the external world.  But they are images that float in a vacuum, without symbolic connections and symbolic meanings relating them to other phenomena.  These are the hyperrealist photo-like paintings and the hyperrealist novels with a morbid focus on the dark sides of life: poverty, criminality and disease.  These creative works don’t develop some larger symbolic message or commentary; instead they hyperfocus the writer on reality, hyperfocus the audience on reality, and shock people into temporarily pulling themselves together and pulling their view of the world together.

            The excessive use of algorithms and algorithmic thinking turns people into robots.  Algorithmic states of mind prevent people from developing more coherent flowing blendable continual methods of thinking like instinct and intuition, where a situation being thought about is grasped as an organic whole.  Algorithmic states of mind prevent people from absorbing the flowing blendable continual stimuli that are an essential part of rich vibrant life experiences and that are the foundation of being able to make, receive and preserve organic imprints.  Algorithmic states of mind prevent the development of the organic coherent senses of self that give people the reflexive awareness necessary to think in terms of using their preserved organic imprints as a foundation for creating a surrogate immortality and preparing for death.  Algorithms certainly are useful in some situations, but they have real limitations as an encompassing framework for the human way of life.

Fighting Battles With Robots



            One indication that modern technological society could be viewed as trying to move in a more civilized direction is the gradual introduction of fighting robots to do our recreational fighting for us.  In an article in Live Science, “Giant ‘Battle Bot’ Could Get Makeover Ahead of Epic Duel”, we learn that MegaBots, Inc., a company based in Boston that focuses on building fighting robots, just started a Kickstarter campaign to raise the funds to build a new, improved robot that would fight its counterpart from a Japanese company.  Now we no longer have to live with the side effects of our favorite recreational fighter getting a broken nose, a concussion or a busted rib cage.  A robot is built of parts that are assembled into a machine entity.  Damage or destroy some of the parts in a fight, and they can be replaced.  It is much easier to replace a machine part than an organic part.  Hippies can proclaim a new mantra:  “Let the robots make war and we’ll make love.”
            But the question is whether fighting robots can truly replace human fighters in terms of the psychological needs of human spectators.  After all, robots don’t have any skin in the game, either literally or figuratively.  A robot does not have the kind of coherent organic self that allows it to experience a threat to its very existence the way a human would.  A fighting robot is programmed to attack and to defend itself, but its fighting is based on programming rather than on an awareness of an existential threat to its mortality.  A robot does not have reflexive awareness; it does not have flowing continual consciousness.  It does not experience fear that it is going to get hurt or that it is going to die.

            A robot does not experience a rush of adrenaline as it goes from the calmer state of a daily routine life to a survival mode.  A fighting robot goes from an off mode to activation for the only thing for which it exists.  Now granted, the program is sophisticated enough that the robot has to operate independently of ongoing human control and manipulation.  It certainly operates more independently than a drone that is guided to a target and fires missiles at it.  But the Battle Bot is still different enough from a human that it would generate little real strong identification from a normal human, unless, of course, the human has become so robotized from all the mirroring and modeling he has experienced from computers, complex machines, and other robots.  So here is a frightening truth about these spectator conflicts between fighting robots: their popularity is based on the fact that human spectators have become sufficiently robotized that they can identify with robots.  They can obtain vicarious satisfaction from seeing their robot damage and destroy another robot, as if something apart from a complex piece of machinery were being affected.

            Traditionally, combat was a meaningful way for men to leave organic imprints, even if negative and destructive ones, on their field of experience.  The spectator, in identifying with a victorious combatant, would participate on a collective level in the organic imprint left by the victory.  The victory would become part of the collective memory of all the observers of the combat, and of all the people who heard the news from the observers.  If a robot’s victory over another robot can generate a similar kind of impact on some people, then for those people, the boundaries between human and robot have truly been dramatically blurred.

            Fighting robots have been and are being considered for actual warfare, and this raises a whole new bundle of concerns.  Are the rules of warfare going to be changed such that robots will fight robots in order to resolve disputes, and whichever army of robots wins the war will determine which military group gets its way?  This is dreaming.  Robots are increasingly going to be an instrument and a weapon used to fight human combatants.  In other words, we are increasingly going to have complex machines that are able to choose targets on their own.  This is, of course, very different from a drone, which is constantly being guided by human operators.

            Were we to see robots battling other robots, we would perceive a situation of conflict in which no organic imprints are being left and in which no hurt or pain to organisms is actually being experienced or perceived.  We would see a situation in which no humans are being injured or hurt, in which no humans are being put in pain or discomfort.  If this situation were to exist, there would be no satisfaction of having made and preserved significant organic imprints by causing pain and death to enemies, and, at the same time, no recognition of the enormity of war, and therefore no experiences to cause a country or militant group to reconsider the next time it contemplates going to war.  In other words, war without the participation of humans is less costly to humans and, therefore, less likely to provide meaningful resolution and closure.

            Robotic warfare that is directed at humans obviously creates pain, injury and death for the human victims.  However, whatever organic imprints exist in the situation are attenuated.  One could say that the person or persons who activate fighting robots are creating some sort of imprint by setting the robots in motion.  But a true organic imprint in fighting involves a mixture of defined discrete motion – involved with the overall direction of the aggressive actions to subdue the enemy – and flowing blendable continual motion – the constantly adjusted moves that have to be made to deal with the shifting target of the enemy.  In fighting between humans, human aggressors are involved in the use and experience of both of these kinds of motion.  As a result, they are leaving and receiving organic imprints.

            When one pushes a button to activate a fighting robot, one is simply giving off one defined discrete stimulus.  This is a highly attenuated organic imprint.  Granted, there is the organic imprint of creating one general plan of attack, but this is more attenuated still, because the planner is removed not only from the immediate experience of fighting, but even from the immediate experience that comes from planning battles in which his human soldiers’ lives are at risk.

            It is only by leaving the negative organic imprint that comes from participating with human agents in warfare that one can truly feel one’s participation in events that lead to human pain, suffering and misery.  Pushing buttons for fighting robots leads to a much more attenuated chain of responsibility.  Pushing buttons for fighting robots is ultimately a numbing experience that doesn’t allow a person to fully experience the intensity of the negative organic imprints that come from killing in war and, therefore, does not convince people as easily of the horrors of war.  Without the organic imprints that come from a more immediate participation, the button-pushers cannot as easily learn the lesson of just how horrible war is.

            So the increasing use of robots in warfare may protect humans from taking on the more active role of combatants, but without that role, there will be less incentive to turn away from the destruction of warfare as a vehicle for resolving conflicts and disputes.  Paradoxically, the use of fighting robots in warfare may prolong wars and lead to more destruction, as the button-pushers blur with the robots they activate to become mechanical conflict generators, to become, in a way, fighting robots themselves.

© 2015 Laurence Mesirow