Sunday, 24 February 2008

Lies, drinking, damn lies, crime and statistics

Britain recently passed a licensing act which essentially allowed drinking establishments to remain open as long as they liked. While this is an oversimplification, it serves the purpose; essentially this has meant that all pubs no longer have to close at 23:00, nor all clubs at 02:00. There was great controversy in the media at the time, and fear that this would lead to a rise in binge-drinking and alcohol-related violence and other crime.

Little surprise, then, that there has been much talk in the last week of all their fears being proven true. Both the Telegraph and the Daily Mail have carried stories to the effect that the new 24-hour drinking laws have fuelled a rise in crime in the UK. This is one of the first things I was taught about in my politics undergraduate course: statistics in the media.

What the new drinking laws have actually led to is a rise in crime figures. This is because the police no longer have to deal with a flood of drunken people on the streets when every pub or club in the area kicks them out; they are no longer overwhelmed, and can actually catch and report a greater number of crimes. This "rise in crime" is in the statistics only, and was entirely predicted and intended by those who designed the law. A quote from a police officer (on Daily Mail Watch) supports this:

"The licensing act (24 hour) has also helped a great deal. Instead of kicking-out time for everywhere at 11pm, we’ve got slow dispersement into the night, so the police haven’t got a great mass of people all at once. Crime has ‘shot up’ after the licensing Act because we CAN detect, arrest and deal with more people, rather than be swamped and therefore unable to arrest/detect any crime at all! This ‘crime-spike’ was intended by the Home Office and the police as a result of the above reason, but you won’t read that in the Daily Mail!"

Always be wary of statistics, particularly ones concerning crime.

[Raise of the tankard to Obsolete]

Friday, 22 February 2008

As if more were needed...

It seems that there is yet more support for the value of the "Socratic method". Not that any more is needed, of course - but it's always gratifying nonetheless.

A new study has been done which suggests that "people who engaged in social interaction displayed higher levels of cognitive performance" [via ScienceDaily]. So not only does the Socratic method allow for the clear and logical exchange and development of ideas, but it also reflects (and takes advantage of) the value inherent in social intercourse.

The paper itself argues something which struck me as possibly misleading. It concerns not the main result of the research, which was fully supported, but rather an observation that "our society appears to be in a state of social decline". This is certainly true in many respects - a reduction in membership of social and other organisations, for instance - and in some cases could prove worrisome, such as the research which indicates that people have fewer "close others" they can talk to about their innermost thoughts and feelings. However, I felt that there was a very important oversight in this passage - though a perfectly forgivable one, as it was not within the ambit of the paper.

Basically, the definition of "social interaction" was a little narrow for my liking. It seemed to define it solely in terms of "face-to-face" interactions, even though part of the reason for the decline in these interactions is the now-widespread ability to interact socially while not in the same room. I would imagine that visiting friends and family began a shallow decline with the advent of the telephone; a decline which only steepened with the coming of the internet. However, especially in the last ten years, there has been an explosion in what might be called virtual interactions. Millions of people subscribe to social networking sites, fora, blogs, and recommendation networks such as Digg and del.icio.us. In some ways if not in others, we are a more socially connected global society than we were just a few decades ago.

There is nothing (at least that I can think of) that would be missing in a long-distance interaction which would negate the apparent cognitively beneficial aspects of social discourse. Unless you want to propose the benefits of proximity to brainwaves from others, of course; but until I see respectable research on that, I'm going to assume it's baloney.

While consideration of this new level of social interaction is unlikely to impact upon the outcome of the research done in this paper, it adds another dimension to the issue. Of course, it's a complex enough issue as it is - needless to say, not all social interactions benefit cognition (it would be hard to believe if that were the case - people exchanging mindless dogmatic racial slurs are thinking more sharply because it's a social activity?); and yet I still embrace this news as reinforcement of the value of Socrates' most important contribution to the world.

[Ybarra, O., Burnstein, E., Winkielman, P., Keller, M.C., Manis, M., Chan, E., Rodriguez, J. (2007). Mental Exercising Through Simple Socializing: Social Interaction Promotes General Cognitive Functioning. Personality and Social Psychology Bulletin, 34(2), 248-259. DOI: 10.1177/0146167207310454]

Monday, 18 February 2008

A Cautionary Tale...

Aren't we all surrounded by people like Mr Pepperdyne? And isn't there a little Mr Pepperdyne in all of us?

Of course, if we all individually researched everything we'd never make any progress; the key is to strike the right balance of belief and doubt.

But by all means don't take my word for it.

Saturday, 16 February 2008

Singularity in 2029?

Unsurprisingly, given my vast interest in all things artificial intelligence, this news story leaped out at me from the BBC News Technology page:

Machines 'to match man by 2029'

The point at which machines reach the same level of intelligence as man is known as the "singularity", and is something I've been hearing a lot about lately. It was mentioned in last week's Skeptic's Guide podcast, in the context of the crazy "Mayan Apocalypse 2012" topic; apparently one of the ways in which the world might end is through reaching this singularity. Which actually links to the second way it came to my attention - through the Terminator spin-off series The Sarah Connor Chronicles. In both of these incarnations, the singularity is a Very Bad Thing. See also the original Matrix premise regarding the apocalypse brought about by the war between humanity and A.I.

But of course it need not be. The article regarding the possible singularity in 2029 is, it should be stressed, just a prediction - albeit a prediction from a leading expert in the field. I see no reason why his prediction should be wide of the mark; the technology in this area is advancing at a tremendous rate of knots. Reverse-engineering the brain (presumably the human brain) has been identified as one of the 14 major technological challenges facing humanity in this still-young century.

Even if the singularity is not reached by 2029, two things are clear to me: firstly, it is itself inevitable for as long as research is done and progress is made; secondly, there's going to be an awful lot of very cool stuff going on by that point - much of which is discussed in that BBC article. Nanobots in particular hold great promise: improving our own intelligence, fighting disease, and enhancing virtual reality.

As far as I'm concerned, singularity is coming. I'm personally hoping that it arrives sooner rather than later, because that will be a truly interesting time to be alive.

Thursday, 14 February 2008

Just a quickie...

Saw this sticker in the back window of a car today, and just had to take a picture. It speaks simply but eloquently.

Monday, 11 February 2008

Blink 3: The Homeopathy Confusion

While there are so many things to say about homeopathy and the various reasons for which it is a perennial target for sceptics' rants, today I'll confine myself to just one: what it actually is. It seems there's some confusion on this score, which may well be contributing to the public's continued acceptance of it as a valid therapy.

It is not a herbal remedy.

This cannot be stated strongly enough, because it's a commonplace confusion and could hardly be further from the truth. Homeopathy operates on two main premises - known as "like cures like" and "the law of infinitesimals". Homeopaths take an agent known to produce the same symptoms as the ailment they are trying to cure, and they dilute it. Then they take a drop of that solution and dilute it again. The end result of this process, which may be repeated many times, is that the final solution is chemically indistinguishable from pure water. In some cases, you would need a sample of that solution the same size as the galaxy to find but one molecule of the original agent. One molecule.
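A quick back-of-the-envelope calculation shows just how extreme the law of infinitesimals is. The sketch below assumes a 30C remedy (thirty successive 1-in-100 dilutions, one commonly sold potency) and a mother tincture at 1 mol/L of active agent - both figures are my own illustrative assumptions, not taken from any particular preparation:

```python
# Back-of-the-envelope arithmetic for the "law of infinitesimals".
# Assumptions (mine, for illustration only): a 30C remedy, i.e. thirty
# successive 1:100 dilutions, starting from a mother tincture
# containing 1 mol/L of the active agent.
AVOGADRO = 6.022e23            # molecules per mole
start_conc_mol_per_l = 1.0     # assumed mother-tincture concentration
dilution_factor = 100 ** 30    # 30C = (1:100)^30 = a factor of 1e60

molecules_per_litre = start_conc_mol_per_l * AVOGADRO / dilution_factor
litres_per_molecule = 1 / molecules_per_litre

print(f"Expected molecules per litre: {molecules_per_litre:.1e}")
print(f"Litres needed to expect one molecule: {litres_per_molecule:.1e}")
```

Even at 30C, you would need a volume of solution vastly larger than all of Earth's oceans combined before you could expect to find a single molecule of the original agent; higher potencies (200C preparations are sold) push the numbers into the truly astronomical territory described above.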

In answer to this, homeopaths came up with the idea that the water retains a spiritual memory of the agent, which is created by agitation of the solution at each stage.

Let's look at that again - a spiritual memory. This piece of information is not widely advertised by homeopaths; they are quite content for the general public to remain ignorant of homeopathy's actual claims and to go on thinking of it as something to do with herbal preparations. Rather than herbs, however, it relies on belief in the healing powers of spiritual impressions in water.

I have no problem with someone using homeopathic "remedies", in principle - as long as they are fully informed of the purported methods by which it operates, and the fact that they are buying into a spiritual belief system - not science, or herbology.

Thursday, 7 February 2008

Reframing the question

This semester, I'm going to be studying the philosophy of artificial intelligence, a field which is of great interest to me. To get me in the right frame of mind, I've been watching a fair bit of science fiction lately, particularly that dealing with the future of robotics/cybernetics/whatever. The one with the most philosophy behind it thus far has to be Bicentennial Man, the story of a robot with a "flaw" allowing him to be creative and develop his own character.

While it's hardly groundbreaking, it does give you a good idea of the sorts of issues that exist around the philosophy of artificial intelligence - the main one being, of course, when (if ever) is it a person? As soon as it demonstrates creativity? Or only when it becomes mortal?

Today I received a book I had ordered, called Imitation in Animals and Artifacts. Flicking through this, it occurred to me that a question I had never heard asked was, rather than "when is a robot to be considered a person?", "when is a robot to be considered equivalent to an animal?". Maybe this is a nonsense, or leads nowhere useful, but it's one I'd like to look into in more detail - if only to shed light on the personhood question. Perhaps I'll have the time to do so soon. For now, I'll just jot down some ideas:

If personhood is based on self-awareness, are there animals that we should consider to be persons? And how do we know when something is self-aware, if we have no way to explicitly communicate?

What would be the criteria for animalism?

Does the fact that AI is usually intended to simulate human intelligence mean that this is a pointless debate? Or is this the aim of AI because it has already reached or surpassed the intelligence level of animals?

Is a computer that can beat any human player at chess more intelligent than an animal who lives in a complex social system and adapts to its surroundings?

Maybe this will be my focus for my artificial intelligence module this coming semester.

Tuesday, 5 February 2008

Bad Sceptics

A problem with calling oneself a sceptic (or indeed a skeptic, if you're Americanistically-inclined) is that there is a widespread misconception of the meaning of this word. Or rather, to be more generous, there are various different meanings depending upon the context in which the word is used. In philosophy, for instance, a skeptic is one who doubts everything except the existence of their own mind as a doubting entity. While this is an oversimplification of the term (head over to the Stanford Encyclopedia of Philosophy's page on scepticism for a more in-depth discussion), it certainly demonstrates that there is a clear difference between a philosophical sceptic and a sceptic as might be referred to in this and similar blogs, for instance. The only really central theme throughout is that a sceptic is a person who doubts.

Thus, when one identifies oneself as a sceptic in everyday discussion with those who may not be engaged in the same communities, there is often a confusion. The listener perhaps takes this to mean that you doubt everything, or automatically dismiss anything that seems remotely unconventional. Believe it or not, there are those who see sceptics as a force against progress - continually holding back innovation in many fields. While this is true to some extent (insistence on proper scientific method does tend to slow down the process after all - it's such a drag having to be accurate all the damn time), it overlooks the fact that sceptics are usually at the cutting edge of innovation. Indeed, it is scepticism in science which allows it to admit when it is wrong, and adapt to new ideas - a process of constant self-improvement. It is the dogmatic approach opposed by scepticism that causes ideas to stagnate; and while proper process necessarily means that research must move slowly, it ensures that it moves accurately.

What in part prompted me to write this entry is this post over at Science-Based Medicine. While this is hardly my area of expertise, it is disturbing to find an organisation, which sets itself in opposition to widely-researched scientific fact, describing itself in terms of scepticism. They are not "cholesterol skeptics" just because they doubt the majority view on cholesterol any more than a UFO nut is a sceptic. A sceptic does not doubt in spite of the evidence - the evidence is a sceptic's tool, and a good sceptic always examines the evidence in as non-biased a way as she can before adopting a position. It will not help the profile of true sceptics to have these anti-science loons using the moniker as their own.

On an entirely unrelated note, if anyone asks you what the harm is in entertaining the occasionally-seductive claims of pseudoscience, you can now direct them to What's The Harm?, an ongoing collection of data concerning harm done through pseudoscience and woo. (hat-tip to Skepchick for this).

Monday, 4 February 2008

Good, ripe hypocrisy

As always, there are few places which will yield more juicy hypocrisy than the United States' government. The latest idiocy of theirs is to give in to demands from the Christian Right (shocker) that the words "In God We Trust" be given greater prominence on commemorative presidential coins. Story here.

Overlooking (with some difficulty) how mind-numbingly trivial this issue is, can we please take a moment to remember one of the things that makes the constitution of the United States truly great? A little something known as separation of church and state. Because of this, state schools are not allowed to hold prayer sessions and the like - a wonderful notion and one which always makes me feel good when ruled upon sensibly by the Supreme Court.

But why, I ask you, are the words "In God We Trust" on its currency? Never mind the petty bickering over commemorative coins - why are they on the money people actually use? Why by Odin's beard are they the national motto, as declared by Congress in 1956? And please don't get me started on the horrific "Pledge of Allegiance". Are these things not clearly unconstitutional?

I think maybe PZ Myers has the answer, in his closing remark of the blog post that alerted me to this stupidity:

They're all demented fuckwits.

Friday, 1 February 2008

Science in the media

It has come to my attention that there remains much to be desired in the field of science reporting in the mainstream media (for which I am using the example of BBC News Online). While I'm all in favour of promoting enthusiasm for science, particularly in the younger generations, I do wish they would avoid the sensationalist wording with which this area has seemingly become saturated. At the moment, I have two recent articles in mind which demonstrate this; the first was entitled "'Bizarre' new mammal discovered", and is found here.

Obviously, as soon as I saw this headline pop up in my RSS feed, I was intrigued. The article failed to deliver, however; at first it seems to be claiming that nothing like this has been seen before. But as you read on, it becomes clear that the truth is far more mundane (though certainly big news and very exciting in itself, particularly for the researcher making the discovery). The "bizarre new mammal" is the 16th species of elephant shrew to have been discovered to date; it is distinguished from the others by being slightly larger and of different colouring. That's it. Hardly as impressive as the headline tried to make out.

The second article was a little worse: "Giant palm tree puzzles botanists". I was, once more, intrigued from the start; but already sceptical due to the "puzzles botanists" bit. It turns out that the tree grows to great size, then expends all its energy in an impressive flowering/pollination display, thus earning it the over-the-top moniker "self-destructing palm". Though this fact is the main focus of the article, the "puzzling" part only comes in a discussion of how it came to be there - which is soon explained by a perfectly plausible theory. Certainly the botanists wouldn't be puzzled by the "self-destructing" nature of this plant - it's far from being unheard-of.

I do wish the mainstream media didn't feel the need to dress up truly interesting news with sensationalist language - the bare facts in these cases should be enough to elicit fascination in themselves. In fairness to the good BBC, and in illustration of my point, I give you "Big mammals key to tree-ant team", a truly interesting piece of news, with virtually no "sexing up" involved. This is how it should be done.