Should LARP be not-for-profit?

If there is one oft-cited rule that almost all LARP organisers face it is this: LARP does not make money. In fact, scraping together enough cash to run the next event, keep the group website registered and online, or pay for prop storage when that convenient friend’s garage becomes unavailable is the constant worry of anyone trying to keep a LARPing group together.

There is evidence to the contrary (of course!): the professional ‘LARPwrights’ of Nordic LARP, the large festival systems that at least make enough money to pay their employees, the adept entrepreneurs who transform LARP into a training activity, or even the savvy LARPers who run the same game twice to save on props.

But here’s the interesting question: is there something about LARP that would be ‘lost’ if events were run on a for-profit basis?

Technology, resources and learning

The role of technology in the learning process is something to which many hours of management consultancy time have been devoted in recent history. It is also something which academic institutions, sceptical at first, have increasingly embraced with open arms, as we are tweeted, podcast, prezi’d and interactive-whiteboarded into a more enlightened future. Technology has also been a subject of study within sociology for a number of years. Such studies have emphasised that the tendency to identify technology as an independent ‘actor’ outside of social relations is misleading at best (see Grint and Woolgar 1997). Rather, technology can be understood as an extension of the roles humans might perform, as a heavy weight on a hotel key ‘stands in’ for a door steward, or a ‘sleeping policeman’ prevents speeding by standing in for the real thing.

Technologies are also not necessarily limited to artefacts. Winner (1986) argued that technology could be defined in three aspects: as apparatus, technique or organization, though admittedly it is difficult to distinguish these in practice. This is one expression of a broader philosophical argument which defines technology as an extension of human capabilities in both an abstract and practical sense (see Rothenberg 1993; Brey 2000): thus a bicycle extends our capacities for swift movement, or a calculator extends our (individual) capacities for arithmetic. Based on this understanding of technology, many objects in common use for teaching and learning activities fulfil the definition. Some examples might be:

  • improved memory (often through artefacts of data/information storage and categorisation such as books or databases)
  • faster and more reliable calculation (through mathematical techniques, algorithms, and the encoding of these techniques into everyday items such as calculators or computers)
  • consistent, comprehensible methods of communication (the organization of the written word, in all its forms, represents one of the most significant technologies of our society, and is often manifest in software to support word processing, spell checking and so on)
  • extended voice (projection of messages across space and/or time through recording, telephony, radio, translation software et cetera)
  • extended social reach (adaptation of material for people of different languages or abilities, use of online networks to circulate material more widely or to collaborate in virtual ‘classrooms’)

Such examples suggest that technology, as an extension of human capacities, offers powerful potential to improve the efficiency of the learning experience. Yet the technological objects only offer potential; capacities have to be realised. Unfamiliar technologies may be as great an impediment to learning as no technologies at all where there is no support for accessing and learning how to use them (again, see Grint & Woolgar 1997). Artefacts often make no distinction between different types of user or student, and as students transition from the routines familiar from sixth form colleges or other educational establishments to the less cohesive structures of diverse university departments, they are likely to encounter significant changes in their experiences with different technologies. The learning ‘gap’ for each student with respect to making the best use of the technology will, in my experience, be different.

Generational distinctions are, in addition, often highlighted as a ‘gap’ between the cultures of the ‘out of touch’ academy and the ‘technologically native’ youth. By implication, engagement with up-to-date technologies by the academy offers a bridge by means of which students may be reached. However, my own (anecdotal) experience suggests this may well be mythical, since I have encountered many academics who are thoroughly competent in the use of the most cutting-edge technology, as well as regular Luddites among the student population. In addition, this understanding of technology does not encompass the full range of learning technologies we have at our disposal: although we might often think of PCs and wireless internet access as the main ‘technologies’ used in contemporary learning, the definition above encompasses a wide range of learning resources, including printed books and strategies for organizing learning time. This mythical simplification not only presents technology as a simple apparatus but also conceals the work undertaken to learn technique and build organization. Within the myth lie two significant potentials: the first is the importance of breaking down barriers between the academy and the ‘real world’ (of students, which is not the same as the ‘real world’ of employers/employees or of politics); the second is the accessibility of knowledge.

Do students care about learning technology?

This post was influenced by a key question in the National Student Survey (NSS), which asks students how well their university has provided access to resources. This concern fundamentally relates to access to knowledge. Writing, as the long-lasting record and encoding of knowledge, is one of the most advanced technologies we possess, yet access to written records is becoming ever more complex. This matter concerns issues as diverse as the opening hours and physical book collections of libraries, and the status hierarchies in academia that inform the choice of subscription-restricted peer-reviewed journals. In this way the written artefact becomes embroiled in more complicated networks of access which students have trouble navigating, either through lack of consultation or through lack of training in the technique and organization of these materials. The accessibility of digital media is also relevant: open-access podcasts may be freely available, but only to those lucky enough or wealthy enough to have a reliable internet connection at home, since IT facilities on campus may often be full to capacity or in use for teaching. When I consulted with students on the problem of accessible materials, it became evident that their main concern was navigating these complex circumstances, and the most effective and immediate solution for my teaching was a ‘low’ technology one: providing guidance on accessing print books in the library, printed copies of notes, and easy-to-print, accessible PDF links for core readings which would not take long to access or could be read on alternative electronic devices such as smartphones or tablets. This is not to suggest, however, that a ‘low’ tech solution is always best: to facilitate revision for students on the same module I have found developing a series of flashcards using the online website studyblue.com very useful, as these can be easily printed or used via any mobile device.

The term ‘blended learning’ is frequently used to describe the application of technology to programme or session design, particularly the use of online delivery of materials or activities. While the definition of ‘blended learning’ is roving and contested (Oliver & Trigwell 2005), there is some discussion over whether this approach to teaching should also be considered ‘theory’. Considering the theoretical approaches highlighted in my previous entry on learning theory, the addition of technology to the process of learning seems to make no intrinsic assumptions regarding the learning process, though the application of certain technologies may indicate a sympathy with cognitive approaches by broadening student choice regarding the order of content, or with behaviourist approaches where technology is used to monitor assessment outcomes (formative and summative alike). To some degree, the discussion of learning technologies highlights their potential for adaptability, suggesting that the fundamental value of using such technology lies in its capacity to accommodate a range of individual learning needs in a diverse cohort, such as those posed by disability or varied learning backgrounds. However, this relies on considerable evaluation of students’ requirements which, if not conducted comprehensively, may act against students’ interests, as they are required to learn not only the content of a given session but also a way of interacting with the technology. Oliver & Trigwell (2005) highlight that a significant limitation of the approach to ‘blended learning’ lies in an absence of analysis from the perspective of the learner, but also note that technology does offer the potential for varied experiences. Such varied experiences may contribute towards an enhanced learning experience, or place the student under an additional burden of isolation and estrangement from the rest of the class and the learning material. I feel it is important to recognise that for many students of differing competencies, too heavy an application of ‘learning technology’ may present this risk: impeding their individual learning rather than facilitating it.

Looking at the theoretical confusion surrounding these issues, it seems that while technology may offer significant potential to improve the learning experience, it may also serve to confuse students and suffer from in-built prejudices regarding access to knowledge. Consequently, I believe the implementation of new learning technologies should take account of students’ needs in a comprehensive way, and students should not suffer the consequences of adoption before analysis.

The importance of feedback: Where and When students learn (to learn)

A reflection on a ‘critical incident’: an occurrence during teaching that encouraged me to reflect on my practice and assumptions.

Should the classroom be the ‘learning’ environment, or should ‘learning’ occur elsewhere? I have always felt that the traditional ‘chalk and talk’ method of lectures assumes the classroom to be the space where students are given a guided tour of the literature on their chosen topic, but that they need to visit those foreign shores themselves in that terrifying time often labelled ‘independent study’. In a previous post I suggested that there are particular theoretical answers to ‘why’ students study (their motivations to learn) that are intrinsically linked to our views on ‘how’ they study. This post focuses on the question of, and assumptions about, where learning takes place. In many contemporary universities, students are now also expected to learn ‘virtually’, in times and spaces facilitated by online materials and engagement with technology and social media. Since this follows from an assumption that all students are digital ‘natives’ already thoroughly engaged in an electronic world, I have a few concerns with the promotion of technological spaces as learning environments, though I will consider this in detail in a future post.

I teach a course in a business school which deals with theoretical areas of sociology; areas of which students have little prior knowledge and often limited awareness of the historical context in which the relevant ideas have been developed and applied. The size of the class varies but would often be considered ‘large’ for a humanities subject. Many of the students are from China, some from south-east Asia, a few from Europe, and a mixture from throughout the UK. A common learning or cultural background is therefore not something to be taken for granted. None have been required to have previous experience of social science subjects in order to enrol on the course, and many are simultaneously studying finance, economics or accounting. These subjects are not wholly quantitative, but quantitative work forms a frequent part of the assessment and skill set in such programmes. Appreciating that the education system often filters students into ‘good at mathematics and science’ versus ‘good at humanities and languages’, I imagine many students of this background enter my compulsory course feeling at a disadvantage. Equally, the majority of students represent the 18-20 year old demographic, and will have recently come from a setting where learning occurs in the classroom, or in set assignments at regulated intervals. The notion of learning according to a timetable is imprinted, in draconian fashion, on their institutionalised bodies.

This reflection is based on the following ‘critical incident’: in a large tutorial group (30 students), after going through discussion of that week’s assigned reading, I was explaining the criteria for the written assessment (a conventional academic essay). This assessment was based on the topics we had just been discussing, and students were concerned about the system of grading. I had provided an overview of how the assessment would be graded as a handout in the previous lecture. One particularly adept student asked, “if so much of the grade is based on study skills, why are you teaching us about these concepts and not teaching us to write essays?” I replied that the university supplied various workshops and activities to hone essay-writing skills, and that this was why I had been informing them about these workshops repeatedly at the beginning of lectures. However, the incident gave me pause to consider: students seemed to expect a ‘classroom model’ of total and complete learning delivery, which was simply not the way I had planned to deliver the module; instead I expected and encouraged students to develop these generic skills through independent study.

This seems to suggest a conflict between a more ‘behaviourist’ model of learning expected by students (see Theories of Learning) and an independent or self-directed model which is significantly more ‘constructivist’ in thinking. One development in ‘constructivist’ approaches to learning attempts to incorporate elements of practice common to the behaviourist model, and is regularly cited: Kolb’s (1984) model of experiential learning.

Kolb’s (1984) model of experiential learning comprises a cycle of learning activity, whereby students progress between different types or styles of activity to learn from active engagement. On the horizontal axis, the model presents observation and action (very similar to stimulus and response), while the vertical axis attempts to ‘fill in the black box’ with internal cognitive processes.

Kolb’s learning styles (copied from ruspat.wordpress.com)

This distinction of mental processes harks back to the Greek distinction between techne, or hands-on knowledge, and episteme, ‘justified true belief’ (or abstract knowledge). This approach to learning further suggests that learners have particular preferences for different stages in this cycle, and so may be at their most effective in different environments; however, the entire cycle is the aim. The roles of the teacher and the pupil, according to this pluralist approach, are more complex. The teacher is responsible for ensuring as many of the stages as possible are represented and facilitated through learning activities under their control, but it is the student’s responsibility to actually go through the process. While this process is likely to incorporate many aspects outside the teacher’s control (in particular, concrete experience), and even, in the case of higher education, outside the sphere of the degree programme, it is implicit in these theoretical approaches that student engagement will follow from appropriate course and activity design.

Following on from Kolb’s theory to return to my course design, I had expected that reflective observation and abstract conceptualisation were the tasks for the classroom, whereas concrete experience and active experimentation in social and business problems (as the topic of the course) and in reading and writing practices (as the medium of learning) were outside of my control. Although I was certain that work and life experience would benefit students in better appreciating the content of the course, I was not sure whether attempting to embed teaching activities on how to write into the module would in fact benefit students, or whether my role was to communicate more strongly the extent to which they have to independently engage with the process of developing their writing skills.

Allan and Clarke (2007) discuss the relative merits of designing programmes with embedded teaching of study skills compared with those without. In their research, they highlight a distinction between ‘generic’, ‘subject-related’, and ‘metacognitive’ skills.

Generic skills are those such as effective communication through presentation and/or writing, using information technology and working with others. The authors found that for some students, formal teaching of these skills could build confidence and improve expertise. Other students did not successfully engage with the activities, for various reasons all relating to the perception of the training as lacking relevance.

Subject-related skills are those which are directly related to the learning activities and assessments specific to the subject programme or module. For this module that may include reading and comprehension skills (especially evaluating the meaning of the author compared to the student’s interpretation), essay planning and writing skills, note-taking, or producing answers under exam conditions. In Allan & Clarke’s (2007) study, some students considered these irrelevant, or found that the particular matters relevant to them were not covered in sufficient depth.

Metacognitive skills are those which relate to the student’s awareness of their own performance and areas for improvement. In this sense they are directly related to assessment and feedback. Encouraging students to develop these skills means, specifically, fostering a more reflective awareness and personal development. Students on the whole engaged with activities related to this aspect of teaching.

Following from their research, Allan and Clarke (2007) advocate that attempts should be made to embed the teaching of subject-related and metacognitive skills within subject teaching. They further imply that this requires commitment from several lecturers across entire degree programmes. However, they also indicate that further research is required to identify whether this is effective for students.

Considering the issue further:

Following this reflection, I have made strong attempts to incorporate some subject-related skills into the course, although I have made a less concerted effort to explicitly address metacognitive skills. In order to develop reading and comprehension skills, I have provided more preparatory exercises, such as questions for weekly readings which incorporate components from Bloom’s taxonomy. Students are given access to online flashcard sets which allow them to undertake multiple-choice tests on these questions, which incentivises their week-by-week completion and allows them to check their progress. Essay planning and writing skills are promoted through the provision of resources and a specific session of relevant activities prior to essay submission, but these are not motivated through clear (behaviourist) rewards.

Due to the limits on classroom time, few explicit sessions on developing metacognitive skills are included in the course. What the module does at present is attempt to get students to develop these through implicit demands made of them in the classroom, such as asking students to relate their answers to their own experience, or to things they may be familiar with from the news or even from popular fiction. There is also a session at the beginning of the course, prior to any teaching on the content, where metacognitive skills are taught more explicitly. Allan and Clarke (2007) suggest that these might be embedded in subject teaching, but incorporating them too heavily in a single module might be counterproductive due to the short timeframe (12 weeks) and repetition across other modules. At this point in time, I feel that a single session in the classroom, with the option of further one-to-one discussion on specific assignments, does well to support the development of metacognitive skills without fatiguing students with repetition.

I do have concerns that attempting to ’embed’ these skills in a module too strongly could result in overburdening the students or in over-assessment. Many of the generic skills are now being introduced as a compulsory part of initial study in a number of universities, but there is (understandably) no corresponding decrease in the expectations for academic content. The question of where and when students learn is also a key concern of applicants, who often wonder where their money (from the increased student contribution to fees) is being spent. There is a competitive view of contact hours among applicants (and their parents) which seems to demand more classroom time, and which implies that responsibility for learning outcomes is attributed more to the teaching than to the learning effort. An attempt to placate these demands with unspecified additional time in a room with a tutor, or some nifty new social networking learning platform, without thorough consideration of where and when students learn metacognitive skills as well as content, seems fraught with peril for any university.

Meaningful Work

Thanks to the ESRC Festival of Social Science and the support of the New Vic Theatre, Newcastle-under-Lyme, last week I ran an event asking individuals to consider what they felt stood in the way of meaningful work. While there has been plenty of academic research into this topic, as well as related concerns about the quality of work in the form of ‘good’ or ‘bad’ jobs, the search for meaningful work, as an academic topic and an everyday activity, seems to fade into the background when many people count themselves lucky to be earning enough money not to need to rely on food banks just to get by.

The workshop was led by Sue Moffat, director of New Vic Borderlines and advocate of the use of theatrical techniques to get people to engage with each other and express their shared knowledge. As part of the workshop we played games to examine how we learn to trust people we work with, how a competitive urge developed, encouraging us to challenge some individuals and make alliances with others. We then talked about this as a group, exploring how important social camaraderie at work can be to make it a meaningful experience, or even how some types of paid work were only meaningful as enabling independence and freedom to do things in other aspects of life. We also listened to recordings about work, thinking about how the sounds and sensations of working could play a part in bringing meaning to a community as much as to individual people, and reflecting in particular on how the disappearance of those sounds and sensations could leave a feeling of loss.

Much of our later activity, building a narrative around images and objects in the theatre, reiterated these themes about society, community and individual approaches to meaning. Using large metal frames we entangled teacups and wallets, stethoscopes and teddy bears. A story of the voyage towards meaningful work was written, considering the importance of the crew aboard the vessel, the storms and dangers of the deep seas, the provisions needed to survive the trip, and the search for dry land. While these metaphors may seem fanciful, they allowed everyone participating in the workshop to explore their shared experiences easily, based on how they interpreted these objects and events. Throughout, we discovered that meaning was elusive, and could be challenged or built through our relationships with others. We explored how many of our everyday frustrations with work were those which challenged its goals or meanings, and how the money obtained through paid work was not enough to fulfil our desires for a meaningful life, and for meaningful work to occupy it.

For more information about the New Vic Theatre, follow this link.

This event was followed by an evening discussion about what business can do for society, hosted by Keele University Management School. There will be a follow up post on this next week.

Where is the dignity in suicide?

"Woodcut illustration of the suicide of Seneca and the attempted suicide of his wife Pompeia Paulina - Penn Provenance Project" by kladcat - Woodcut illustration of the suicide of Seneca and the attempted suicide of his wife Pompeia Paulina. Licensed under Creative Commons Attribution 2.0 via Wikimedia Commons - http://commons.wikimedia.org/wiki/File:Woodcut_illustration_of_the_suicide_of_Seneca_and_the_attempted_suicide_of_his_wife_Pompeia_Paulina_-_Penn_Provenance_Project.jpg#mediaviewer/File:Woodcut_illustration_of_the_suicide_of_Seneca_and_the_attempted_suicide_of_his_wife_Pompeia_Paulina_-_Penn_Provenance_Project.jpg
“Woodcut illustration of the suicide of Seneca and the attempted suicide of his wife Pompeia Paulina – Penn Provenance Project” by kladcat – Woodcut illustration of the suicide of Seneca and the attempted suicide of his wife Pompeia Paulina. Licensed under Creative Commons Attribution 2.0 via Wikimedia Commons 

The Campaign for Dignity in Dying is gathering momentum. It has many well-known advocates, including prominent actors and authors who promote the cause of physician-assisted suicide. Lord Falconer’s Assisted Dying Bill is soon to be discussed by a Lords committee, and polls indicate that the right to end your own life when suffering from terminal illness is supported by much of the public. I disagree with this, not because I think terminally ill individuals should be forced to live with suffering, but because I feel this is detrimental to how we recognise individuals’ dignity and worth in our society.

What is Physician Assisted Suicide?

Suicide needs very little definition: it is the act of ending one’s own life. Theoretically, this is an act of complete free will, and it has a history of religious condemnation. Much of the debate around ‘dignity in death’ stems from the challenge presented by degenerative illness. In most degenerative illnesses, individuals will reach a point where they lose their capacity to act, though they may not have lost the intellectual capacity to make and express a free choice. It is for this reason that families and individuals are campaigning for ‘the patient’s right to choose’ the place and time of their death, by de-criminalising the act of a physician allowing their patients access to fatal drugs and, in the case of patients in an advanced stage of degenerative illness, the involvement of a physician or family member in administering the fatal drug.

In most countries around the world, access to definitively fatal substances is highly regulated, and they can only be accessed by medical professionals. Yet the long-standing ethical practices of the medical profession (first, do no harm), as well as the legal dangers posed to the individual physician, militate against their involvement in cases where patients ask them for help to die. In addition, if the physician is the person who actually inserts the needle or pushes the button, there is very little to legally distinguish medicine from murder.

Yet there is an argument that when a patient is certain to die, or can only survive with minimal quality of life (as with individuals kept alive by life-support machines with no hope of recovery), such an action is compassionate. In such cases it is only the medical professional, with access to appropriate substances, who can enable a pain-free death with the least amount of suffering. It is also implied that it is only thanks to the techniques of contemporary medicine that such a prolonged life span enduring degenerative disease is possible, and therefore the responsibility for any non-intervention or withdrawal of care leading to death also lies with the medical practitioner.

Rights and Duties

Do we have a right to choose? The debate over assisted suicide is based on a deep underlying assumption that we do, yet this is problematic not only on religious but also on secular grounds. First there is the matter of scale: if we ought to have the right to choose the time and place of our death in the case of degenerative disease, then we should recognise that death is inevitable and therefore such choices ought to be valid outside of those circumstances. On religious grounds such decisions are a rejection of life, and thus sinful or likely to provoke suffering. In a secular vein, decisions taken to end one’s own life when young and healthy are categorised as abnormalities, as expressions of sickness themselves.

A further conflict lies in the very idea of individual rights and the relationship between the individual and the body. It is a quirk of Western legal thought that the body and mind are understood as separate things. Although the embodied person may have rights, those rights do not include ownership of another body (which would be slavery) or of one’s own body (in the sense of exchange: selling body parts or the use of one’s body is also often illegal).

Finally, the concept of rights identifies the individual as a separate creature, detached from ties to others. The rights of the individual are often held as paramount over the obligations, duties or other responsibilities between that individual and others. The concept of the individual as a being apart from their social ties is a foundation of rational legal thought, but it omits our emotional interconnectedness as human beings as well as our role in enabling the capacities of others. Although suicide may appear to be an expression of free choice, most individuals see suicide as a last resort, where all alternatives for meaningful life are denied them. Perhaps we should be doing more to explore these alternatives.

The Individual Citizen

We live in a society dominated by thinking that concentrates on a model of the individual. Assisted suicide may alleviate the pain of the patient, but how does it affect their family, friends, neighbours or co-workers? Aside from the concerns of the family, this side of the debate is not often explored, primarily because we already segregate the sick from the healthy in contemporary society. Sickness prevents participation in work, often requiring residential care in a hospital or other institution. Built environments primarily cater to the healthy and able-bodied individual, further excluding those suffering from illness from full participation in society. Might this be a part of why these individuals feel they simply cannot experience a worthy or dignified life as their illnesses progress?

Dignity and suffering

While the aim of the dignity in dying movement implies restoring control to patients and allowing them to experience less pain and suffering, the model of physician-assisted suicide has already been tested. Research on Oregon’s legalisation of physician-assisted suicide has noted that individuals’ suffering and requests for assisted suicide were based not on the pain caused by the illness, but rather on the consequences of social exclusion.

It’s not that I disagree with the need for terminally ill patients to experience death with dignity, but I do think that lethal injections, access to drugs or the rights of the patient are not the main issue at stake here. What is at stake is the potential we offer the sick or the suffering to engage with a meaningful life. How should we amend our laws and regulations to support people suffering from illness in a society dominated by a drive for economic productivity that pursues segregation between the productive and the unproductive?


Theories about learning: motivation and practice

Following a recent post on a friend’s blog about undertaking postgraduate certificate qualifications in teaching at university, I thought I would start the process I have been promising myself for months now: publishing my blogs on learning to teach. NB: some of this material has recently been submitted for assessment purposes, so enjoy the read but don’t quote it in your own teacher training programme!

I formally started the teaching at university programme about six months after I began working at my current university. Unsurprisingly, as at a lot of universities I have heard of, the programme was not held in high regard by academic staff, mostly because they were compelled to undertake it and had developed (over the course of the PhD or over many years of research-focussed work) some cynicism towards the programme tutors. Broadly, this cynicism related to three factors: (1) a belief that students who are motivated to learn, do, regardless of the techniques applied by lecturers; (2) a view that programme tutors did not sufficiently account for the constraints on lecturers that follow from large class sizes, limited resources and bureaucratic impediments to change; (3) scepticism about the political aims behind the programme and whether it signified a move to a ‘customer oriented’ model of teaching that fundamentally undermines the authority of the lecturer as ‘expert’. Following from this third element was a critical attitude towards the political status of universities in the UK and the consequences of changes to student fees and recruitment in the most recent attempt to create a higher education ‘market’. But I’ll come back to this issue in a later post.

Today’s post focuses solely on point (1): theories about learning and the motivation to learn, and summarises two broad theoretical approaches: behaviourism and cognitivism. What is interesting is that each model gives the teacher a different role, and requires them to engage with students in a different way. Each also suggests that different rewards or learning environments will produce varying results in how much and how well students learn.

These approaches to the study of learning have much in common with the fields of psychology and social psychology generally, and as such I have been a bit sweeping in the assertions which follow. Each has its historical place in influencing learning institutions and systems, and consequently some aspects of learning, teaching and assessment that are often taken for granted can be linked to different parts of these theories.

Behaviourism: looking at external action, not internal subjectivities

Behaviourism is one of the earlier approaches to learning, drawing on the notion that since the internal workings of the mind are objectively unknowable, only external factors can be studied. “Learning is defined simply as the acquisition of new behaviour” (Pritchard 2008:6). Central to this is the basic premise that all creatures respond to stimuli so as to increase positive experience and decrease negative experience. Central theorists include Watson (1958), Skinner (1953) and Thorndike (1966). Historically, this approach to the study of psychology was particularly functionalist, and much of the research in this area focused on ‘conditioning’ subjects into a particular habit of response. You might have heard of famous examples of this sort of research, such as Pavlov’s dog experiments, where dogs are trained to associate the noise of a bell with food, such that eventually, even when the food is absent, the sound of the bell will make them behave as if it were present.

While many conditioning experiments may seem crude, or even laughable, by today’s standards, they were incredibly influential in their practical implications. However, the perspective was not universally well received, as it placed human beings in the same category as any other kind of animal. Skinner’s (1971) Beyond Freedom and Dignity is a particularly vehement response to his critics, arguing that humans had to ‘get over’ their belief in their own special status if society was to be functionally improved. This experimental approach was also criticised for oversimplifying the study of behaviour (see Eddie Izzard’s sketch about Pavlov’s cat for a laughable example of what happens when not all variables are controlled).

Based on this simple view of student motivation as merely a learned response to stimuli, learning approaches that adopt this view might be summarised as ‘stick’ or ‘carrot’ techniques. Approaches as different as the Victorian ‘spare the rod and spoil the child’ and contemporary practices around the need for ‘positive feedback for psychological engagement’ all fit this model. Any focus on rewards for correct behaviour is underpinned by behaviourist theory, whether it is a directly ‘conditioned’ response or a ‘shaping’ (using goal-setting approaches) towards ideal behaviour.

The limitations of using a behaviourist approach to designing learning activities are usually listed as including a limited or ‘surface’ approach to learning, as the desired response could be produced without developing an understanding at a ‘deeper’ level; it is limited to rote learning (Pritchard 2008).

An interesting part of behaviourism, however, is that it places the responsibility for ‘correct learning’ directly upon the teacher, provided the student complies with the system. It is the responsibility of the teacher to identify desired behaviours and reward them appropriately. Additionally, students may have come from schools or colleges that use this sort of approach, and therefore to an extent are already ‘conditioned’ to expect this sort of learning activity and reward.

Cognitivism (or Constructivism): Looking inside the black box

A different approach to learning is apparent in cognitivism. Focusing on the workings of the brain from multiple perspectives, cognitivism gives primacy to the idea that learning is an internal process. Much of the research on which these theories are based comes from developmental studies with children, or with those suffering from developmental difficulties. The underlying principle contends (against behaviourism) that learners are active agents in the learning process, and that learning should be approached in a holistic manner (this is associated with ‘gestalt’ theories). This suggests that students respond to patterns as much as to individual stimuli.

Many different approaches tend to get clustered under the cognitivist label. Two early theorists in the area are Piaget (1926) and Vygotsky (1978). While both share similar principles, they differ in the priorities they give to particular aspects of the learning experience. Vygotsky’s approach (ibid) focussed on the social interaction between teacher and learner, stressing that it is within that relationship that the teacher can help provide a framework (and break down earlier frameworks) which the learner then strengthens and models for themselves. Piaget, by contrast, stressed that the learner engages independently with artefacts provided by the teacher and develops knowledge which is incorporated into schema (a sort of subjective framework, see Smith, Dockrell & Tomlinson 1997). Both theorists stress the significance of activity undertaken by the learner, alone or with the teacher, as a key part of the process (Jarvis 2003).

Compared to the behaviourist approach, the constructivist approach, as a consequence of a more subjective understanding of learning (by experience), tends to offer a view of learning which allows pluralistic versions of knowledge (i.e. there is space for more than one ‘correct’ answer or way of doing things). By contrast, the behaviourist view presents a much more rigid position on what does and does not constitute legitimate knowledge, indicating a one-way transmission of that knowledge from teacher to learner. The two approaches also commit to different priorities and techniques for the design of the teaching and learning environment. Clearly, certain training programmes may tend towards the behaviourist perspective, as some interpretations or behaviours are considered illegitimate, misguided, or even dangerous, whereas disciplinary areas more tolerant of pluralism may be more inclined towards a cognitive view.

A synthesis of constructivist and behaviourist theoretical leanings is apparent in the majority of current approaches to institutionalised learning, perhaps thanks to inherited behaviourist systems of the past, or the failure of cognitivist learning experiments to revolutionise teaching styles. One frequently used reference point which demonstrates this is Bloom’s (1956) taxonomy of (cognitive) knowledge[1]. Bloom’s taxonomy presents multiple ‘building blocks’ as a progressive hierarchy of knowledge attained through learning, where the achievement of each stage requires proficiency in the stage below (this strongly informs international comparison standards regarding the level of achievement represented by particular qualifications).

Bloom’s taxonomy (diagram)

The original presents a continuum suited to a programme of behavioural ‘shaping’, but also stipulates the cognitive activities students are expected to undertake. Bloom’s framework was revised in 2001 in order to represent changes in educational language more comprehensively and to incorporate the type of knowledge the student is expected to master (factual, conceptual, procedural and metacognitive), as well as the cognitive process they engage in to do so (Krathwohl 2002). There have been some critiques of Bloom’s taxonomy, however, which suggest that the hierarchy of cognitive approaches may be reversed, and that the production of knowledge in the form of ‘facts’ is a hard-won outcome of the other processes (Wineburg & Schneider 2010). After all, in scientific endeavour, that is how research produces knowledge!

Wineburg and Schneider’s (2010) argument could be seen as a revisiting of Bloom’s framework which highlights a shift away from behaviourist models of learning towards cognitivist approaches. A behaviourist approach to learning, with its focus on stimulus-response-reward, privileges a basis in the accumulation of facts through rote learning, followed by study in the skills of manipulating those facts for logical analysis and evaluation. In this presentation of Bloom’s taxonomy, the teacher provides students with ‘legitimate’ knowledge in the form of facts, then slowly leads them through a process whereby each stage is reinforced through reward, often in the form of good test marks though sometimes using more mundane rewards (such as sweets or book tokens). Wineburg and Schneider (ibid) argue that the taxonomy may instead be read in the opposite direction, where knowledge is the outcome of the learning process rather than its base. This derives from a more constructivist approach which builds upon the notion of the learning ‘scaffold’ (see Sylva 1997).
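As a toy illustration of these two readings (my own simplification, not a formalisation offered by Bloom, Krathwohl, or Wineburg and Schneider), the revised taxonomy’s cognitive levels can be treated as an ordered list that is traversed ‘knowledge-first’ under the behaviourist reading, or reversed so that knowledge becomes the outcome:

```python
# The six cognitive-process levels of the revised taxonomy (Krathwohl 2002).
# Treating them as a flat ordered list is a deliberate simplification for illustration.
REVISED_TAXONOMY = ["remember", "understand", "apply",
                    "analyse", "evaluate", "create"]

def behaviourist_reading(levels):
    """Knowledge (remembering) comes first; each stage presupposes the one below."""
    return list(levels)

def reversed_reading(levels):
    """The inverted reading: knowledge as the hard-won outcome of creating,
    evaluating and analysing, rather than the starting point."""
    return list(reversed(levels))

print(behaviourist_reading(REVISED_TAXONOMY))
print(reversed_reading(REVISED_TAXONOMY))
```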

 

[1] It is important to recognise that the committee of which Bloom was head intended to encourage a synthesis between three different types of learning; cognitive, affective and psychomotor (see Krathwohl 2002). I have rarely come across discussion of the latter two dimensions at university, which may be instructive in how far such discussions have penetrated in the educational domain.


Epistemology and the study of games

As some of you might know, I recently attended a conference in Cornwall where I presented a joint paper with a friend and colleague based on her work on Cthulhu horror LARP. The conference was interdisciplinary, with a keynote speech from a renowned medieval historian, and we both had a fabulous, if tiring, time. In the same panel as our own paper there were two papers on horror-themed computer games, and it was interesting to see how these were also being theorised. This post presents a bit of a rant about how these are studied, but I also highlight some of the useful overlaps between the study of computer games and the study of LARP.

In the past I have dabbled in reading about studies of contemporary computer game RPGs and classic MUDs and MOOs (basically multiplayer text-based gaming). However, I often find the claims made about the player experience are based on little more than the imagination of the researcher. While this kind of thing might be fine for a games reviewer, I tend to feel that university researchers are obliged to do a bit more work than that, or at least be honest about the limits of what they are claiming. This is due to different opinions on, or confusion about, epistemology.

So, for non-philosophers, here’s the cheat sheet:

ontology = the study of what exists.

epistemology = the study of what we believe, or can know.

Questions about ontology, what exists, are usually, for all practical purposes, simple. This campsite exists. My tent exists. The rain exists, and if I don’t get my tent set up soon all my equipment exists and will get pretty wet! The problem comes in when we start talking about individual or collective experiences or symbols. For example, my hardware exists and is downloading the newest patch which will then allow me to get around the DRM and play the game I’ve purchased. Well, the concept of ownership in digital media is a bit ropey at best, as peer-to-peer filesharing has highlighted. And is an experience a game if it feels dull and monotonous (regardless of whether it’s packaged in a shiny box)? These debates start to cause problems for our certainties about what exists, because we cannot be certain in our epistemology – what we can know.

If you are having trouble following at this point – swallow the red pill. This illustrates the problem of ‘Descartes’ demon’: someone or something (like a demon, a cat, or a race of intelligent machines) could, unknown to us, be interfering with our perceptions of the world. And even if there is no interfering demon, this example implies that we cannot trust our own senses 100% of the time anyway. How we interpret what we see is based on our existing frameworks of knowledge and language, built over time and experience. It is either really difficult or impossible to imagine our perceptions of reality outside of that experience. So the position most scholars of social science take on this is somewhere between ‘really difficult’ and ‘impossible’.

If your position is ‘really difficult’, your solution to this problem of epistemology (which you have to come up with, otherwise what would be the point of research?) is to find techniques to improve the likelihood that your study is an accurate study of what exists (such as running your experiment many times, or comparing your findings with those of multiple other scholars). If your position is ‘impossible’, then you basically accept that you can never know what exists, only what you think exists, and you limit yourself to the study of that. Very few scholars are this far down the spectrum, but they might, for example, limit themselves to the study of ‘my experiences of gameplay’ rather than ‘gameplay’. You then have to address a further problem: is what you think the same as what everyone else thinks? This is the question of epistemology in social science, because it basically screams ‘am I doing anything useful?’ Again, it can be quite simple when we are looking at the uncomplicated things the world often seems to be. Does that look like a wasp over there? Yes, it’s a wasp, I agree. Okay, based on our compared experiences/perceptions of the world, let’s stay away from it then!

But what about if you have never seen a wasp before? Or been stung by one? What if different people have different ways of seeing and interpreting the world based on their experience? Well that makes it difficult. And this is when both individuals are supposedly sharing ‘the same encounter’ with the wasp.
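To make the ‘run it many times’ fix concrete, here is a deliberately silly simulation (my own toy example, nothing to do with any actual games research): when the problem is only noisy observation, repetition pulls the estimate towards whatever is really there; it does rather less when, as with the never-stung observer, people’s whole frames of interpretation differ.

```python
import random

# Toy model: there is a 'true' value out in the world (say, how long the
# average player spends in character), but every observation we make of it
# is distorted by noise -- the Cartesian demon, our own biases, plain bad luck.
TRUE_VALUE = 42.0

def one_observation():
    """A single noisy measurement of the true value."""
    return TRUE_VALUE + random.gauss(0, 10)

def study(n_observations):
    """Average many observations: more repetitions, less distance from the truth."""
    samples = [one_observation() for _ in range(n_observations)]
    return sum(samples) / len(samples)

for n in (1, 10, 1000):
    estimate = study(n)
    print(f"{n:>5} observations -> estimate {estimate:.1f} "
          f"(off by {abs(estimate - TRUE_VALUE):.1f})")
```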

If you are studying a game, or any social experience, it is maybe okay to assume that most people will share some common cultural references or models: ideas that seem ‘natural’ among a particular group, culture or society. However, it seems like a bit of a leap to suggest that the audience of gamers acts like a sponge, absorbing the game experience as designed. We might instead agree that their individual experience will be specific to them as individuals. So studies of a game or social experience need to be based on information about that experience, collected by doing it, or by observing or questioning the people who do. And subsequently, what we can claim to ‘know’ about the game needs to be acknowledged within those limits, or compared across a broad range of gamers’ experiences.

So, in my personal approach to epistemology, I have written about LARP based on my experiences and on those reported to me by other participants. I do not suggest that this resembles the definite or common experience of all LARPers. But there are (at least) two parts to a LARP game, and people have written a lot about this. There is the story, and there is the gameplay. There is what the organisers try to make happen, and have players experience, and then there is what players actually experience. Many different things influence both of these dimensions.

In discussions of computer gaming, there is the same acknowledgement of the importance of the game narrative (studied by narratologists, sometimes referred to as the diegesis or diegetic frame), and the game design (studied by ludologists).

This is a simplification, but for the sake of this (long) post let’s keep things simple. Narratologists broadly claim there is no difference between games and storytelling, and therefore no meaningful distinction between oral epics, printed novels or point-and-click adventures. They argue these can all be studied using theories traditionally applied to narrative. Ludologists argue that the ‘story’ part of the game is just the icing on the cake, and that what ought to be the focus of study is the rules and mechanisms of the game.

It seems that both of these approaches focus on the game itself as a real thing that exists. Or at least, the focus is on the created narrative as a cultural product, or the set of rules as an algorithmic product with multiple possible operations. I am perfectly happy with studies looking at this, but where I get twitchy is when either side starts to make claims about how players experience the narrative/ludic elements without a clear statement that outlines how the problem of epistemology has been overcome here. This requires some sort of claim about what we can know about players (by being one, observing one or asking one). But the interesting thing is, the relationship between game and player is not a simple one of design and receipt (and most scholars of games do acknowledge this). No game is thrown out into the world on a ‘take it or leave it’ basis of meaning or interpretation.

So let’s go back to LARPers again. There has been a bit of debate among LARPers about how a game operates, its rules and story, and the difference between ‘Roleplayers’ and ‘Powergamers/players’. It raises its head most frequently in discussions around player-versus-player elements of games. And in such discussions there is a lot of awareness that the people who write or design the games are players too, and that players switch between a focus on story and a focus on gameplay. There’s even a sort of complex, cool, creative doublethink between being your character among your enemies and being a LARPer hanging out among your friends.

So in this blog post I have included multiple hyperlinks to demonstrate the cultural codes and references I am thinking of when I use some of the terms here. But I’d like any readers to comment on whether they think that simply by adding these connections I am restricting or enhancing your diversity of (narrative or ludic) experience in reading this post.

 

tl;dr: IMHO studies of games should look at what players actually experience, not just the story or gameplay design. Studies of computer games distinguish between ‘plot’ and ‘game mechanics’ just as big debates in LARP do, but they could learn a few things from LARP.

Monstering: changes in the air

It has been a really long time now since I attended a fantasy LARP. Well over a year, and unfortunately my work and personal commitments this year make the outlook bleak. I missed much of last year due to personal and wedding plans, and as a result I’m a bit out of the loop on what is going on in our ‘finely woven webs of magic and belief’! I hope to attend 2-3 events later in the summer though, so hopefully we will have fabulous LARPing weather!

So this rather explains why the blog has remained in stasis for so long, but there are new entries to come! In this entry in particular, I have noticed that this year seems to be shaping up to be the year of controversy over monstering. So, for the non-LARPers out there, monstering is basically being one of the helpers, crew or bad guys at any given event (see my previous post). Monsters traditionally participate in events for free and receive small benefits in return: this is where controversy is emerging, as some events are beginning to request small fees from monsters to secure a place, or promising bigger rewards. There are always concerns for organizers about monsters, for several reasons:

1) monsters are a cost

Most sites have a per-person charge, or a scale of charges based on occupancy, so the price of tickets for players will always be directly or indirectly affected by the size of the monster crew. Even for the rare event held on an open site, public liability insurance charges also scale on a per-person basis (usually in bands at 50, 100 and over 150 participants, though this varies). Keeping costs low for players will therefore always rely on having an effective and appropriately sized monster crew.
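To make the arithmetic concrete, here is a rough sketch (in Python, with entirely invented figures of my own; real sites and insurers will price differently) of how the player ticket price shifts as the monster crew grows, assuming a simple per-person site charge, banded insurance and monsters attending for free:

```python
# Rough, illustrative sketch only: all figures are made up for the example,
# not taken from any real site or insurer.

def insurance_cost(total_people):
    """Banded public liability insurance (hypothetical bands and prices)."""
    if total_people <= 50:
        return 80.0
    elif total_people <= 100:
        return 140.0
    else:
        return 220.0

def ticket_price(players, monsters, site_fee_per_person=12.0, fixed_costs=300.0):
    """Price per player ticket if players cover everyone's site fee,
    the insurance band and the fixed costs (props, storage, admin)."""
    total_people = players + monsters
    total_cost = (total_people * site_fee_per_person
                  + insurance_cost(total_people)
                  + fixed_costs)
    return total_cost / players  # monsters attend for free

# 60 players with a 20-strong crew versus a 40-strong crew:
print(round(ticket_price(60, 20), 2))  # smaller crew -> cheaper ticket
print(round(ticket_price(60, 40), 2))  # larger crew -> dearer ticket
```

Even in this toy version the crew size feeds straight into the player ticket price, which is exactly the tension organisers keep wrestling with.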

2) monsters are needed

A good quality event relies on good monsters who are experienced, informed and enthusiastic. Including organizers in the category of ‘crew’ here, it is simply impossible to have an event without them. It is also true, however, that player expectations in fantasy LARP are seen to demand fewer low-activity events where little effect can be made on the world, and more open-world events where players have free choice to engage in different aspects of the plot or storyline. These types of games require more props, bigger sites, and more monsters.

3) are monsters motivated?

Following on from the above very significant points, most participants (whether players or monsters) know that enthusiasm and contribution to the event can weigh much more than money. An eager monster who finds some great costume in a drawer and brings it along, a group of friends who can work well together to portray a military unit, or even someone who gets enthusiastically stuck in to whatever job needs doing (even making the tea!) makes an incredible contribution to the success of any event. Motivated monster crews are also important for increasing player numbers, because many people get their first introduction to LARP through monstering an event. Yet this is a completely unpredictable element, which may rely fundamentally on any variety of possible causes, so it can be nerve-racking for the organizers! There are little things that organizers try to do to improve motivation, including providing tea, coffee and sweeties, priority bunks, experience for your player character or other incentives, but these often come with costs which need to be outweighed by the benefits. And there is always the danger that these incentives might drift into ‘payment’, coming to resemble the feeling of work (see below).

 

So that explains why organizers might have to deal with conflicting ideas about what monsters should be expected to give or pay, and how much (or whether) they should be rewarded. Yet there also seems to be a problem for monsters around obligation and enjoyment, which spills over between the hobby and other commitments.

4) How much does it cost?

People volunteering to monster an event may well participate for ‘free’ but may have to pay the associated costs of transport, catering, accommodation and equipment. These are the same costs that might be a part of playing the game, but with no guaranteed level or type of enjoyable participation in the game, and less leeway to ‘make your own fun’, these costs may seem more significant.

5) Am I having fun? (is this like work?)

As a player, it’s easy to choose your own preferred style of play. Personally, I’ve always enjoyed playing very minor monsters: the squishy one-hit-goblin type who is destined to lose (as monsters are, unlike some amazing one-hit super-goblin players with magic swords I could mention). However, if you prefer a competitive playing style, taking on roles where you have no chance of winning is not going to be particularly enjoyable. In addition, many of the other tasks that might be necessary as a crew member can be draining and mundane; too much like hard work rather than fun. Even an unlimited supply of sugar and caffeine can sometimes be a poor substitute for enjoyment.

6) Do I have to be here?

As paper bookings have given way to email, and online forums have broadened into social media such as Facebook, there is in some ways a stronger sense of a LARP community. But in some places this seems to put a serious (stated or implied) obligation on regular players to participate as monster crew or risk losing their hobby altogether. There is an equally strong tendency to report on events as they happen, emphasising what is sometimes termed FOMO (fear of missing out). Also, the wider reach of advertising about events puts more pressure on players and monsters to attend more events, and increases demand for experienced monster crew (including referees and organizers). This presents monstering as a more serious obligation, a necessary way to maintain the community, adding a level of pressure which may simply override a decision to participate on other grounds.

These pressures on monsters and event organisers are hardly new. There have also been a number of events in the past which were so popular with monsters and players alike that these grievances were shown to be insubstantial. But in the circumstances of rising site costs, rising transport costs, dropping player numbers and more significant ‘real-life’ demands, these problems seem to be getting squeezed from both sides. Of course, this is only a rough summary of debates I have seen elsewhere, and I am only adding a little information drawn from wider debates around the conditions of economic life in the UK to spice up the discussion.

What has your experience been? As a monster or organizer what is your best experience of an event? Or the worst?

Comments especially welcome to this post!


The blog is dead, long live the blog…

After several months without a post, I have finally accepted the inevitable: I simply lack the discipline to commit to a regular blog on a single topic every week. I have therefore decided to resurrect the blog by incorporating more of my writing activity on other topic areas, including reflections on the everyday aspects of academic life and research writing on other topics.

Recently, there has been a rush of interest in the Treasure Trapped LARP documentary and the Scandinavian LARP Panopticorp. I still find these things interesting and will blog about them where possible. This week I have mostly been reading up on social science fieldwork and the production of ethnography. Ethnography, generally speaking, is an attempt to study and portray cultures and sub-cultures. Journalistic writing such as Lizzie Stark’s book is one of the areas in which the academic and the popular overlap, and this can be considered a sort of ethnography. My fieldwork reflections on LARP always came from the perspective of being a LARPer first and a social scientist second, so the tales I can (or am willing to) tell are from a more native, and in a sense less ‘scientific’, perspective. However, I did use techniques to try to create a bit of distance between my experience and my reflection, and they are techniques I see role players using all the time (if you check out the Panopticorp video you will find them there). One technique is to imagine explaining your actions to a very different audience (and people may distinguish between character roles and players here). Another is to closely examine the emotions experienced during and after the game, especially reflecting on times when you were just in a good ‘flow’, ‘in the zone’, or ‘effortlessly in character’. Personally, I find this happens especially in horror LARP.

So I have found the Panopticorp video interesting, in particular because the players’ reflections have made me think a bit more about what I take away from a game besides whether it was fun or not. It also seems to be common practice in ‘Scandi-LARP’ to have these debriefing sessions both during and after the game. These seem to be really valuable to players and to game organisers, but I also think it’s important to stress the overwhelming preference in UK LARP for action. While I may well write a future post on this at length, some readers might want to look at this blog, where the author reflects on the sheer beauty of doing LARP.

Shared Fantasy: Live-action versus technologically mediated hyper-reality

Hello world. It has been some time since I had the leisure to post. But now I have a wordpress app! It’s the future. Jetpacks. Robot servants. A life of opportunity. Utopia or dystopia?

The fictional stories I read in the 1980s promised technological wonders such as these, and none so wondrous as the idea of virtual reality. Whether you remember the headsets of the ’90s or the holodeck from Star Trek, the notion of advanced technology blurring the line between what was ‘real’ and what could be experienced as real was a topic of much excitement and possibility. Of course, so was teleportation, but this post isn’t about that.
What I have been thinking about is the distinction between ‘swords and sorcery’ style LARP and the recent popularity of ‘augmented reality’ games made possible by the popularity of technologies like the iPad or smartphone. These games need such technology to create a consistent game world in a way that messing about in a field with some foam swords does not. Yet even foam swords and costumes are products of technology, artefacts that ‘mediate’ our engagement with our imagined world.
So I wonder about the role such objects play in our ‘pretend’ world compared with the ‘real’ one. Food for thought… (To be continued….!)