What to do in Lectures: a guide

It’s that time of the year again and campus is filling up with fresh-faced undergraduates wondering just what they’ve let themselves in for. The more confident second-year undergraduates are returning from their holidays, looking forward to seeing friends and perhaps a little worried that their second year is beginning and the work ‘counts’ now (it contributes towards their degree classification). So for both the newbies and the experienced students, now is a great time to get prepared for the sessions ahead. But, really, what are you actually supposed to do in lectures?


I’m going to ramble about this, but for those who’d prefer a one-page graphic guide I have taken inspiration from my friend Matt over at Errant Science and made you a comic. First of all, let me introduce you to my comic self…


Hi there!

In a traditional lecture, an academic will spend most of the time talking to you about a specific subject in which they have expertise. We like to talk! But while we talk, what do you do?

The point of having a lecturer is that they are a subject expert, and as such they have lots of information and expertise that it would take you years to read up on. Think of them as a knowledge funnel, condensing all of that information down into a smaller space (and time). The problem is that in many university degrees (and almost certainly in the lecture itself) you won’t be using that information straight away, so it can be hard to absorb.

You might have heard about learning styles – the idea that some people learn better by listening, or reading, or drawing… That idea has now been shown to be incorrect. Though you might have a preference for the way you like to be taught, you mostly learn the same way as everyone else – by problem solving. Human beings are hard-wired problem solvers. But when the problem isn’t immediate, it can be hard to know what you should be doing while your lecturer is at the front rambling away!

But actually, everybody there does have a problem to solve – how to get a great degree! Often this comes with an ambition to gain the knowledge you need for a great career afterwards too. Attacking these problems requires a more focused approach in your lectures. Your immediate problems to focus on are:

  • How can I pay attention throughout this lecture (especially if I’m really sleepy)?
  • How can I transform this lecture into a record I can learn from?
  • How can I identify the most important information in this lecture?
  • How can I work out what areas I understand and where I need to ask questions to make sure I will do well in my assessments?
  • How do I come up with the right sort of questions?

 


For most students, the wonder of technology seems to promise an answer to many of these questions – after all, the lecturer has provided PowerPoint slides or notes that you can download, right? And it’s pretty easy to use your smartphone to record what they say!

Unfortunately, PowerPoint is not a great resource to learn from, especially as it’s a pretty poor format for communicating complicated or non-linear ideas. Also, as it’s such a boring format, it’s more…. likely….. to………..zzzzzZZZZZ

Oh, is that the time? Sorry, I was snoozing there for a second.

The best thing you can do in a lecture is use techniques that help you engage with what is being said. One such technique is taking notes! Taking notes will help you pay attention and will create great personalised records for you to learn from. If you use a method such as the Cornell Method presented here, it will also help structure your learning activities for after the class is over.

The Cornell Method relies upon you taking written notes, but uses a standard page layout to encourage you to:

  • create a summary of what you hear;
  • pinpoint key ideas and concepts by looking for verbal or non-verbal cues such as repetition or gesturing, and flag points you don’t understand so you can ask questions about them at an appropriate time;
  • collate the key messages from each page in preparation for your follow-up work.

Organising your page according to the Cornell Method is really simple.


In Part 1, the main section of your page, aim to make comprehensive notes on what the lecturer actually says (which may or may not reflect what they have put online). You won’t be able to capture every word, so abbreviate and focus on things that are repeated, emphasised with gestures or tone, or which seem to form the central or most significant points of the discussion.

In Part 2 of your page, the side column, you can note slide numbers or references: if a section of the lecture refers to a specific reading or theory, mark this next to the section you have written on it. This makes it much easier to review your notes later. You can also put question marks next to parts that confuse you, or that you need to investigate further.

Leave the bottom section of your page, Part 3, blank during the lecture, to give yourself space to go back and review your notes after the lecture is over. This will help you see the ‘bigger picture’ and may help you come up with questions you need to ask your lecturer or tutor. It’s also a really useful space in which to summarise the lecture or section of notes, so you can quickly find relevant material when preparing your assessments or revising for exams!
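If you prefer typing to handwriting, here is a minimal sketch of how you might pre-generate a Cornell-style page in Markdown before each lecture. It is purely my own illustration (the file name, section headings and comments are my assumptions, not part of the method itself):

```python
# cornell_template.py - generate a blank Cornell-style notes page in Markdown.
# A minimal sketch: headings and file naming are illustrative assumptions.

from datetime import date

TEMPLATE = """# {topic} ({day})

## Notes (Part 1)
<!-- comprehensive notes: abbreviate, capture repeated or emphasised points -->

## Cues / references (Part 2)
<!-- slide numbers, readings, '?' marks for things to chase up -->

## Summary (Part 3)
<!-- leave blank in the lecture; fill in during your post-lecture review -->
"""

def make_page(topic: str) -> str:
    """Return a blank Cornell-style page for the given lecture topic."""
    return TEMPLATE.format(topic=topic, day=date.today().isoformat())

if __name__ == "__main__":
    # Example: write a page for today's lecture to a Markdown file.
    with open(f"notes-{date.today().isoformat()}.md", "w") as f:
        f.write(make_page("Introduction to Organisation Studies"))
```

The point of automating the layout is simply that the three parts are already waiting on the page, so during the lecture you only have to fill them in.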




Are we adventurers in Platform Capitalism?

A review of Srnicek, Nick (2016) Platform Capitalism, Polity Press: Cambridge

 

At 171 pages and only three chapters, Nick Srnicek’s book is a brief and digestible entry point to the shifting territory of an increasingly digitally-mediated form of economics and labour, now debated under a diversity of terms including the ‘gig economy’ and ‘the fourth industrial revolution’. In particular, I had high hopes for the text as a way of catching up on debates on the social impact of technology on work, the changing conceptualisation of capitalism as the free-surfing internet age has transformed into the ‘app for that’ age of smartphones and social media, and possibly the way this has affected our notion of value in a global economy. Unfortunately I have to admit that I found the book disappointing in these areas, particularly as the BBC’s Thinking Allowed interview was considerably more thought-provoking.

Overall, the book focuses mainly on the context of the United States – appropriate considering the location of many tech headquarters in Silicon Valley, California, and their historical role in the development of new digital technologies and in the promulgation of alternative business models for technological enterprises (most notoriously in the dot-com boom and bust). The first chapter paints an abbreviated historical picture of shifts in the regulatory and economic context affecting business (mainly manufacturing) from the 1950s to the present. It focuses primarily on the role of government investment, the accessibility of venture capital and economic interventions such as quantitative easing, and on how these responded to and effected change in corporate strategies. While the chapter highlights the impact of changing economic environments in heightening global competition, I would have liked to see a more explicit statement of the author’s theoretical position on the source of economic value. And while the focus on the United States may suit the book’s intended audience, it omits important reflection on the economic transformations in India and China, which are of significant importance to any analysis identifying outsourcing and technological transformation as key to its historical arguments.

The second chapter sets out to consider whether we are living in a new age of capitalism, defined by the new technologies supported by extensive smartphone use. In the first few pages the author skims over a wide range of debate regarding how we theorise the source of value in contemporary capitalism, and while there is some further discussion in the notes, the limited presentation of this debate was disappointing. Briefly alluding to Italian autonomism and debates on collaboration and knowledge as a source of value, the author also speeds past the contentious debate regarding immaterial labour[i] to claim that we can analyse platforms by viewing data as a raw material extracted from service users. Despite this allusion to Marxist analysis, there are points where the analysis seems to rely on a conventional economic framing of the problems of marginal utility faced by these firms. The discussion then moves to a description of the characteristics of platforms in general, specifically how they stand in relation to monopolising the acquisition of this ‘resource’ and how they tailor their services to strengthen this monopolising tendency until all user activity is captured. By this reasoning, the strategy of an application such as Uber is to acquire all records of requests for transportation, and their fulfilment, in all geographic spaces. In becoming ubiquitous, the service drives out any and all interactions that do not comply with the model.

The presentation of different types of platform (advertising, cloud, industrial, product and lean) comprises the remainder of the chapter, and offers some interesting insights for those undertaking research and analysis of platform activities.

The final chapter of the book, dramatically entitled ‘Great Platform Wars’, outlines the structural and strategic activities and tendencies of specific firms in their attempts to capture or acquire more data. It makes a few allusions to the influence of the practices and policies of different nation-states in industries such as manufacturing, including China’s overproduction of steel, and hints at the way the behaviour of platform enterprises might be understood as part of an American search for continued strategic economic power. Unfortunately this line of discussion is not pursued very far, and although it offers a tantalising glimpse of the research that might be possible through a focus on the dynamics of platform-based enterprises, readers will have to undertake their own further research to get a more satisfying picture.


[i] In general, the analysis presented by Srnicek (as in publications by Langley and Leyshon and others) focuses on material economic relations and has little to say about the contribution of labour other than as a free source of data generation or the means by which algorithms are developed for its organisation. For a more in-depth discussion of the question of labour’s contribution see Toms 2008; Beverungen, Bohm & Land 2015; and Pitts 2016.

Anti-Slavery – in all its forms

Today is Anti-Slavery Day. You, like my students, may have thought that slavery was a thing of the past, but over 13,000 people are estimated to be in modern slavery in the UK today. Slavery was considered acceptable in historical periods for multiple reasons, one significant reason being the idea that there were categories of human being who did not deserve the full freedom from domination enjoyed by the elite. This very concept is rearing its ugly head again today, as modern slavery incarcerates people not only through physical imprisonment and the retention of identity documents, but also by perpetuating fear and ignorance among its victims.

The Declaration of Human Rights prohibits slavery not only in protection of the individual, but also in recognition of the universal status of all human beings as persons worthy of the chance to determine their own lives, pursue their own work and family lives and strive to overcome the difficulties we all face. It is so often in an attempt to escape from threats to these goals that people become entrapped in debt and forced labour. And in this we should recognise that their troubles are not dissimilar to those any of us might face, despite the stigma we have traditionally associated with slavery. Slavery is no longer something confined to illegal spaces of drugs and prostitution, but now a large number of seemingly legitimate service businesses (from nail bars to car washes) are engaged in using forced labour. These businesses have survived on the very fringes of profitability for a long time, but the dignity of those who labour legitimately is being tarnished by this illegal practice and its unfortunate victims.

In the United Kingdom we are often proud of our industrial heritage, our scenic country homes open to the public and our historic town centres. Yet so much of this innovation, industry and architecture relied upon the suffering of others, forced to work until death in order to support our past imperial ambitions. While we may like to recall only the historic rejection of the slave trade, our ancestors benefited directly from it for multiple generations. In recognition of this fact, we ought to strive all the harder to prevent the flourishing of its modern incarnation.

You can find out more about modern slavery here.

Work, dirt and stigma

Dirty work is not only work which is grubby or unpleasant, but also work which carries a stigma. We, as human beings, have many rituals of order, from marking different periods of life as spaces apart from each other (such as the transition from child to adult), to keeping the vegetables and meat on separate shelves in the fridge. Certain topics and substances have been identified as carrying a sort of universal stigma or taboo, in that societies and cultures from many different times and places seem to manage them carefully; blood, notably, has this significance. Yet the strange part is that even when people who work with blood and encounter it every day have been cleaned, sanitised and removed from their place of occupation, they often encounter behaviour based on the persistence of that ‘taint’. This is the basis of Goffman’s (1963) theory of stigma, which relies upon the idea that we often hold an idea of a person in our heads which differs from the qualities of the person in front of us. For Goffman, this is ‘virtual’ versus ‘actual’ identity, and the gap between these two sets of characteristics is stigma.

When we think about work, then, it’s pretty clear that some types of work carry a polluted ‘virtual’ social identity, an identity tainted by association with the substance or status of the work. Dr Hamilton has conducted a number of studies of work undertaken with animals, much of which involves contact with ‘dirt’. In her recent study with Professor McCabe, she looks at contemporary meat production, as compared with the expectations set by classic studies such as Ackroyd and Crowdy’s study of slaughterhouse workers. She points out that even within these industries there are clear hierarchies: some work might still be considered ‘dirty’, while other work is carefully distinguished as ‘above’ such pollutants. It is also the case that some workers might be simultaneously repelled by, and drawn to, dirty work.

This suddenly reminded me of a job I worked in prior to my career as an academic. As a customer account manager for a national company, I worked in a very clean and tidy office complex on an industrial estate. I spent hours on the telephone managing the organization’s relationships with our key customers, trying to ensure that we always met our contractual obligations and kept their business. However, this company was a waste disposal firm which had diversified from office cleaning and sanitary waste to all kinds of specialist waste regulated by environmental legislation, as well as pest control (a function it had acquired through a corporate takeover). Our employees would visit clients’ premises regularly to collect their waste and transport it to our disposal centres, which were distributed at key locations across the country.

This was a particular problem for some of our remote customers based in the countryside. For some sites, a waste collection van would have to drive for five hours to make the round trip. If the building was locked, or van access was prevented by roadworks, the client would often complain to me by phone that the waste had not been removed, and my role was to liaise with the manager of our disposal centre to arrange for a staff member to visit the site again. These repeat visits often involved convincing staff to work unpaid overtime and to travel to sites where the waste might well be overflowing its containers, so a second visit could mean a long trip in a pungent van.

This work may well have been stigmatised by its contact with pollutants, from bins full of nappies or sanitary towels through to used needles collected from tattoo parlours, hospitals or rehabilitation centres. But the contact with the ‘dirt’ of the job didn’t change in essence when workers were asked to work overtime – the difference lay in the fact that the extra hours often didn’t result in extra pay.

This overtime work was often refused by employees, and the managers of the disposal centres also often rejected requests for secondary visits, so my work largely involved persuading and cajoling these workers on the one hand, while convincing our customers to keep their accounts with us on the other. My work did not involve contact with pollutants, and as such bore little obvious stigma. Yet work that deals with the aggressive emotions of customers and the defensive attitudes of managers carries its own ‘taint’ – such emotion work is usually the undervalued preserve of women (see previous post).
This anecdote highlights the sort of hierarchies and distinctions in an organization that focuses on an industry classically ‘tainted’ as dirty work. Can the hierarchy ameliorate the stigma? Do you think that my work as a customer account manager was stigmatised by the industry we worked within? Plenty of food for thought here.

Doing Difference

Gah! Talk about technical hiccups. Sorry readers – for some reason my automatic transfer of posts from the other blog to this one did not go as planned. But the good news is that this means there are many new posts to come! This one is based on a lecture about gender difference given to students on my module Contemporary Issues in Management.


 

We often think about difference as something natural. ‘We’re all different from each other,’ we like to think, ‘everyone is special in some way.’ Yet we rarely think about how collective (and individual) difference is a careful production, maintained through regular work and activity. Women are often more aware of this than men, because there are certain unspoken rules about ‘correct’ or ‘good’ appearances that we explicitly teach each other – (unintentionally) smudged eyeliner or mascara is a definite faux pas, and not wearing makeup is itself sometimes a political statement.

But this work of emphasising similarity or difference is not only evident in women’s makeup; it appears in a wide number of other everyday actions and activities. Standing in a queue quietly, with upright stance and facing forward, for instance, signifies something different from standing in a queue while slouching at an angle and talking to a friend nearby. The second type of behaviour conveys the message: ‘I’m in this queue, but it’s not terribly important to me. I can get along just fine without keeping my place in it or getting to the front as quickly as possible.’ It might be read as a mark of status, or of aggression and control. It may indicate the person’s confidence that they can get the item or service they are queuing for by other means, or at another time.

Although this is a contemporary issue, you might be surprised to find that the notion of the performance of difference (in studies of gender, at least) was widely popularised in an article by West and Zimmerman (1987) entitled ‘Doing Gender’. If you follow this link and scroll down you will see the large number of articles this publication has influenced, covering a wide variety of topics in business and organization as well as sociology in general.

At the time of West and Zimmerman’s (1987) writing, there was already a clear divide in the study of gender between ‘natural difference’, or sex, and ‘constructed difference’, or gender. The subtleties of this distinction were hotly debated at the time and continue to be discussed as key principles of the study of gender. West and Zimmerman argued that we can think about gender as a performed accomplishment: an outcome of continuous, ongoing work and the performance of everyday activities in ways which align with (and reinforce) expectations about ‘masculinity’ or ‘femininity’. But what happens when some of these characteristics are more valued than others? Or when performing activity in a certain way is a job requirement? Based on research in Manchester, Dr Darren Nixon explored how the huge shift in the UK economy towards service-sector work, which often requires subservient (‘feminized’) behaviour, disadvantages working-class men looking for work: throughout their lives they have developed everyday patterns of behaviour based on masculine expectations which are not compatible with this type of work. For men who have learned to be brash, confident in their skills, aggressively independent and plainspoken, work in department stores and at perfume counters simply does not ‘fit’.

This approach is important when you consider how frequently research is interested only in the business case for diversity in organizations. The ‘business case’ approach often assumes that our identities are fixed by our own decisions, a result of choices freely made throughout our lifetimes. What the performative approach emphasises is that many of these decisions slip by unnoticed in our everyday practices of getting by in the workplace and fitting in. As such, small things such as an organizational dress code, or recruitment policies looking for the ‘proper look’ for an organization, neglect the fact that these practices are learned and performed through association with certain communities. The business case approach also rationalises people’s complex lives and connections to each other as the choices of individual ‘employment applicants’, thereby justifying ongoing practices of exclusion or even harassment.

When thinking about your own expectations of gendered roles, you might want to list the measures of ‘appropriate behaviour’ among your own group of friends or acquaintances, and how those expectations might change for work colleagues. Think about what you would count as a challenge to your identity practices. You might find this discussion of the ‘policing’ of appropriate behaviour in an American high school informative – such behaviour in school might influence what sort of further education or training you consider a good prospect. You could also consider what occupations you find least attractive, or even distasteful, and why.

 

 

Magna Carta and the achievement of moral equality

It may have escaped your notice that the Magna Carta is 800 years old today. On the other hand, Google has doodled it, so you may be sick of hearing about it by now. The amazing thing about the Magna Carta (or ‘Great Charter’, ‘huge paper’, ‘big list’) is that its influence lies more in its symbolic relevance than in its original content.

There is a marvellous exhibition of the history and impact of the Magna Carta currently at the British Library, and if you get a chance to look at it you will find that many of the provisions laid down in the Magna Carta were very quickly reverted, changed, addressed in a different charter or fundamentally ignored. Yet the symbolic nature of the Magna Carta lies in its existence as evidence of an agreement between unequal parties, as a contract between ruler and ruled.

At Runnymede in June 1215, the then monarch of England, King John (yes, that King John! The one previously at odds with a certain be-stockinged outlaw based in Nottinghamshire), was in a difficult spot. The many wars between his brothers and father had contributed to the loss of England’s lands in northern France, and Richard’s involvement in the Third Crusade had drained the coffers of the Treasury, which were further depleted by John’s attempts to reclaim lands in France. The ‘taxpayers’, that is, the English barons (particularly in the North), were not particularly pleased with John’s failures and his constant demands for money. Neither did he have the support of the church: he had fallen out with the Pope some years previously and had barely returned to his good graces when the English barons marched on London. The conditions of the Magna Carta were forced upon John, who promptly ignored them, perpetuating civil war in England until his death (probably from dysentery) in 1216.

Now, I’m not a historian, so why is this contextual idiosyncrasy around the rule of a despotic monarch of interest? The Magna Carta is held as a precious moment in history by the legal profession, as it symbolises a key moment when the highest secular power (the monarch) was held to account by (secular) law. This raising of the importance of secular law is perhaps why the Magna Carta was immediately challenged by the Church: the very principle of the administration of justice in this world, rather than in an afterlife, is encoded in the document.

The actual contents of the original document, however, were subject to substantial revision, with few articles remaining unaltered in the revised versions. The most substantial effects at the time pertained to the rights of common people to access and use Royal Forest land to gather firewood and pasture animals; the majority of these rights were protected in a later, separate charter (the Forest Charter) issued under Henry III.

However, most discussion of the impact of the Magna Carta refers to principles of government rather than of land use and ownership. Social reform movements appropriated the symbol of the Magna Carta to challenge the authority of imperial rulers (hence its significance in the USA), as well as the legitimacy of the rule of the propertied classes. The Magna Carta is therefore identified as influential in the suffragette movement, the civil rights movement and the human rights movement. And so it is the symbolic nature of the event, rather than the sealed vellum parchment, which marks the Magna Carta as the start of a recognition of universal moral equality.

Nonetheless, it is important to reconsider the content of the original and revised charters. For while the principles of moral equality are enshrined in contemporary legal documents such as the Declaration of Human Rights, these documents focus primarily on civil liberties: the right to liberty and freedom from degrading treatment. The later articles, such as the right to desirable work and an adequate standard of living, are rarely discussed. In the original concerns of the Magna Carta, even the privileged English barons challenged the authority of an absent monarch to determine that people be unable to heat their homes or feed their families. Moral equality is more than an abstract principle, and considering the vast inequalities growing in our contemporary societies, the Magna Carta is perhaps a better touchstone than some for reconsidering how that equality ought to be implemented today.

The exhibition on the Magna Carta in the British Library continues until 1st September 2015

The British Library website also has loads of amazing information about the charter

Technology, resources and learning

The role of technology in the learning process is something to which many hours of management consultancy time have been devoted in recent history. It is also something which academic institutions, sceptical at first, have increasingly embraced with open arms, as we are tweeted, podcast, prezi’d and interactive-whiteboarded into a more enlightened future. Technology has nonetheless been a subject of study within sociology more broadly for a number of years. Such studies have emphasised that the tendency to identify technology as an independent ‘actor’ outside of social relations is misleading at best (see Grint and Woolgar 1997). Rather, technology can be understood as an extension of the roles humans might perform, as a heavy weight on a hotel key ‘stands in’ for a door steward, or a ‘sleeping policeman’ prevents speeding by standing in for the real thing. Technologies are also not necessarily limited to artefacts. Winner (1986) argued that technology could be defined in three aspects: as apparatus, technique or organization, though admittedly these are difficult to distinguish in practice. This is one expression of a broader philosophical argument which defines technology as an extension of human capabilities in both an abstract and a practical sense (see Rothenberg 1993; Brey 2000): thus a bicycle extends our capacity for swift movement, and a calculator extends our (individual) capacity for arithmetic. On this understanding of technology, many objects in common use for teaching and learning fulfil the definition. Some examples might be:

  • improved memory (often through artefacts of data/information storage and categorisation such as books or databases)
  • faster and more reliable calculation (through mathematical techniques and algorithms, and the encoding of these techniques into everyday items such as calculators or computers)
  • consistent, comprehensible methods of communication (the organization of the written word, in all its forms, represents one of the most significant technologies of our society, and is often manifest in software to support word processing, spell checking and so on)
  • extended voice (projection of messages across space and/or time through recording, telephony, radio, translation software et cetera)
  • extended social reach (adaptation of material for people of different languages or abilities, use of online networks to circulate material more widely or to collaborate in virtual ‘classrooms’)

Such examples suggest that technology, as an extension of human capacities, offers powerful potential to improve the efficiency of the learning experience. Yet technological objects only offer potential; capacities have to be realised. Unfamiliar technologies may be as great an impediment to learning as no technologies at all where there is no support for accessing and learning how to use them (again, see Grint & Woolgar 1997). Artefacts often make no distinction between different types of user or student, and as students transition from the routines familiar from sixth-form colleges or other educational establishments to the less cohesive structures of diverse university departments, they are likely to encounter significant changes in their experiences with different technologies. The learning ‘gap’ for each student with respect to making the best use of the technology, in my experience, will be different.

Generational distinctions are, in addition, often highlighted as a ‘gap’ between the cultures of the ‘out of touch’ academy and the ‘technologically native’ youth. By implication, engagement with up-to-date technologies by the academy offers a bridge by means of which students may be reached. However, my own (anecdotal) experience suggests this may well be mythical, since I have encountered many academics who are thoroughly competent with the most cutting-edge technology, as well as regular Luddites among the student population. In addition, this understanding of technology does not encompass the full range of learning technologies we have at our disposal: although we might often think of PCs and wireless internet access as the main ‘technologies’ used in contemporary learning, the definition above encompasses a wide range of learning resources, including printed books and strategies for organizing learning time. This mythical simplification not only presents technology as a simple apparatus, but also conceals the work undertaken to learn technique and build organization. Within the myth lie two significant potentials: the first is the importance of breaking down barriers between the academy and the ‘real world’ (of students – not the same as the ‘real world’ of employers/employed or of politics); the second is the accessibility of knowledge.

Do students care about learning technology?

This post was influenced by a key question in the National Student Survey (NSS), which asks students how well their university has provided access to resources. This concern fundamentally relates to access to knowledge. Writing, as the long-lasting record and encoding of knowledge, is one of the most advanced technologies we possess, yet access to written records is becoming ever more complex. The matter involves issues as diverse as the opening hours and physical book collections of libraries, and the status hierarchies in academia that inform the choice of subscription-restricted peer-reviewed journals. In this way the written artefact becomes embroiled in complicated networks of access which students have trouble navigating, whether through lack of consultation or through lack of training in the technique and organization of these materials. The accessibility of digital media is also relevant: open-access podcasts may be freely available, but only to those lucky enough or wealthy enough to have a reliable internet connection at home, since IT facilities on campus may often be full to capacity or in use for teaching. When I consulted students on the problem of accessible materials, it became evident that their main concern was navigating these complex circumstances. The most effective and immediate solution for my teaching was a ‘low’ technology one: providing guidance on accessing print books in the library, printed copies of notes, and easy-to-print PDF links for core readings which would not take long to access or could be read on alternative devices such as smartphones or tablets. This is not to suggest, however, that a ‘low’ tech solution is always best: to facilitate revision for students on the same module I have found it very useful to develop a series of flashcards using the website studyblue.com, as these can be easily printed or used on any mobile device.

The term ‘blended learning’ is frequently used to describe the application of technology to programme or session design, particularly the online delivery of materials or activities. While the definition of ‘blended learning’ is roving and contested (Oliver & Trigwell 2005), there is some discussion over whether this approach to teaching should also be considered ‘theory’. Considering the approaches highlighted in my previous entry on learning theory, the addition of technology to the learning process makes no intrinsic assumptions about that process, though the application of certain technologies may indicate a sympathy with cognitive approaches (by broadening student choice regarding the order of content) or with behaviourist approaches (where technology is used to monitor assessment outcomes, formative and summative alike). To some degree, the discussion of learning technologies highlights the potential for adaptability, suggesting that the fundamental value of such technology lies in its potential to accommodate a range of individual learning needs in a diverse cohort, such as those posed by disability or by diverse learning backgrounds. However, this relies on considerable evaluation of students’ requirements which, if not conducted comprehensively, may act against students’ interests, as they are required to learn not only the content of a given session but also a way of interacting with the technology. Oliver & Trigwell (2005) highlight that a significant limitation of ‘blended learning’ lies in the absence of analysis from the perspective of the learner, but also note that technology does offer the potential for varied experiences. Such varied experiences may contribute towards an enhanced learning experience, or may place the student under an additional burden of isolation and estrangement from the rest of the class and the learning material. I feel it is important to recognise that for many students of differing competencies, too much ‘learning technology’ may present this risk, impeding their individual learning rather than facilitating it.

Looking at the theoretical confusion surrounding these issues, it seems that while technology may offer significant potential to improve the learning experience, it may also serve to confuse students, and it may suffer from in-built prejudice regarding access to knowledge. Consequently, I believe the implementation of new learning technologies should take account of students’ needs in a comprehensive way, and students should not suffer the consequences of adoption before analysis.

The importance of feedback: Where and When students learn (to learn)

A reflection on a ‘critical incident’: an occurrence during teaching that encouraged me to reflect on my practice and assumptions.

Should the classroom be the ‘learning’ environment, or should ‘learning’ occur elsewhere? I have always felt that the traditional ‘chalk and talk’ method of lecturing assumes the classroom to be the space where students are given a guided tour of the literature on their chosen topic, but that they must visit those foreign shores themselves in that terrifying time often labelled ‘independent study’. In a previous post I suggested that there are particular theoretical answers to ‘why’ students study (their motivations to learn) that are intrinsically linked to our views on ‘how’ they study. This post focuses on the question of, and assumptions about, where learning takes place. In many contemporary universities, students are now also expected to learn ‘virtually’, in times and spaces facilitated by online materials and engagement with technology and social media. Given the underlying assumption that all students are digital ‘natives’ already thoroughly engaged in an electronic world, I have a few concerns about the promotion of technological spaces as learning environments, though I will consider this in detail in a future post.

I teach a course in a business school which deals with theoretical areas of sociology: areas of which students have little prior knowledge, and often limited awareness of the historical context in which the relevant ideas were developed and applied. The size of the class varies but would often be considered ‘large’ for a humanities subject. Many of the students are from China, some from south-east Asia, a few from Europe, and a mixture from throughout the UK. A common learning or cultural background is therefore not something to be taken for granted. None are required to have previous experience of social science subjects in order to enrol on the course, and many are simultaneously studying finance, economics or accounting. These subjects are not wholly quantitative, but quantitative work forms a frequent part of the assessment and skill set in such programmes. Appreciating that the education system often filters students into ‘good at mathematics and science’ versus ‘good at humanities and languages’, I imagine many students of this background enter my compulsory course feeling at a disadvantage. Equally, the majority of students are in the 18-20 year old demographic, and will have recently come from a setting where learning occurs in the classroom, or in set assignments at regulated intervals. The notion of learning according to a timetable is imprinted, in draconian fashion, on their institutionalised bodies.

This reflection is based on the following ‘critical incident’. In a large tutorial group (30 students), after going through a discussion of that week’s assigned reading, I was explaining the criteria for the written assessment (a conventional academic essay). The assessment was based on the topics we had just been discussing, and students were concerned about the system of grading; I had provided an overview of how the assessment would be graded as a handout in the previous lecture. One particularly adept student asked, “If so much of the grade is based on study skills, why are you teaching us about these concepts and not teaching us to write essays?” I replied that the university supplied various workshops and activities to hone essay-writing skills, and that this was why I had repeatedly been informing them about these workshops at the beginning of lectures. However, the incident gave me pause: students seemed to expect a ‘classroom model’ of total and complete learning delivery, which was simply not the way I had planned to deliver the module; instead I expected and encouraged students to develop these generic skills through independent study.

This seems to suggest a conflict between the more ‘behaviourist’ model of learning expected by students (see Theories of Learning) and an independent or self-directed model which is significantly more ‘constructivist’ in thinking. One development in ‘constructivist’ approaches to learning attempts to incorporate elements of practice common to the behaviourist model, and is regularly cited: Kolb’s (1984) model of experiential learning.

Kolb’s (1984) model of experiential learning comprises a cycle of learning activity, whereby students progress between different types or styles of activity in order to learn from active engagement. On the horizontal axis, the model presents observation and action (very similar to stimulus and response), while the vertical axis attempts to ‘fill in the black box’ with internal cognitive processes.

Kolb’s learning styles (copied from ruspat.wordpress.com)

This distinction between mental processes harks back to the Greek distinction between techne, or hands-on knowledge, and episteme, ‘justified true belief’ (or abstract knowledge). This approach further suggests that learners have particular preferences for different stages in the cycle, and so may be at their most effective in different environments; however, the entire cycle is the aim. The roles of the teacher and the pupil, according to this pluralist approach, are more complex. The teacher is responsible for ensuring that as many of the stages as possible are represented and facilitated through learning activities under their control, but it is the student’s responsibility to actually go through the process. While this process is likely to incorporate many aspects outside the teacher’s control (in particular, concrete experience), and in the case of higher education even outside the sphere of the degree programme, it is implicit in these theoretical approaches that student engagement will follow from appropriate course and activity design.

Following on from Kolb’s theory to return to my course design: I had expected that reflective observation and abstract conceptualisation were tasks for the classroom, whereas concrete experience and active experimentation in social and business problems (the topic of the course) and in reading and writing practices (the medium of learning) were outside of my control. Although I was certain that work and life experience would benefit students in better appreciating the content of the course, I was not sure whether attempting to embed teaching activities on how to write into the module would in fact benefit students, or whether my role was to communicate more strongly the extent to which they have to engage independently with the process of developing their writing skills.

Allan and Clarke (2007) discuss the challenges of designing programmes with embedded teaching of study skills, as compared with those without. In their research, they distinguish between ‘generic’, ‘subject-related’ and ‘metacognitive’ skills.

Generic skills are those such as effective communication through presentation and/or writing, using information technology and working with others. The authors found that for some students, formal teaching of these skills could build confidence and improve expertise. Other students did not successfully engage with the activities, for various reasons all relating to the perception of the training as lacking relevance.

Subject-related skills are those directly related to the learning activities and assessments specific to the subject programme or module. For this module that may include reading and comprehension skills (especially evaluating the meaning of the author as compared with the student’s interpretation), essay planning and writing skills, note-taking, and producing answers under exam conditions. In Allan & Clarke’s (2007) study, some students considered these irrelevant, while in other cases the particular matters relevant to them were not covered in sufficient depth.

Metacognitive skills are those which relate to students’ awareness of their own performance and areas for improvement. In this sense they are directly related to assessment and feedback. Encouraging students to develop these skills meant specifically fostering a more reflective awareness and personal development. Students on the whole engaged with activities related to this aspect of teaching.

Allan & Clarke (2007) advocate that following from their research, attempts should be made to embed the teaching of study-related and metacognitive skills within subject teaching. They further imply that this requires commitment from several lecturers across entire degree programmes. However, they also indicate that further research is required to identify if this is effective for students.

Considering the issue further:

Following this reflection, I have made strong attempts to incorporate some study-related skills into the course, although I have made a less concerted effort to explicitly address metacognitive skills. To develop reading and comprehension skills, I have provided more preparatory exercises, such as questions on the weekly readings which incorporate components from Bloom’s taxonomy. Students are given access to online flashcard sets which allow them to take multiple-choice tests on these questions; these incentivise week-by-week completion and allow students to check their progress. Essay planning and writing skills are promoted through the provision of resources and a specific session of relevant activities prior to the essay submission, but these are not motivated through clear (behaviourist) rewards.
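As an aside for the technically curious: the kind of weekly multiple-choice self-testing described above is easy to mimic. The sketch below is a hypothetical stand-in, not the flashcard service I actually use; the sample questions and the idea of logging scores to a JSON file are my own illustrative assumptions.

```python
# weekly_quiz.py - a toy multiple-choice flashcard quiz that records
# week-by-week scores so students can check their progress.
# Illustrative sketch only; the questions and the JSON score file are assumptions.

import json
from pathlib import Path

QUESTIONS = [
    # (prompt, options, index of the correct option)
    ("Which part of a Cornell notes page is left blank during the lecture?",
     ["The main notes section", "The side column", "The bottom summary"], 2),
    ("In Kolb's cycle, 'abstract conceptualisation' sits opposite:",
     ["Concrete experience", "Reflective observation", "Active experimentation"], 0),
]

SCORES_FILE = Path("scores.json")

def run_quiz(week: int) -> None:
    correct = 0
    for prompt, options, answer in QUESTIONS:
        print(prompt)
        for i, option in enumerate(options):
            print(f"  {i + 1}) {option}")
        choice = int(input("Your answer: ")) - 1
        correct += (choice == answer)
    # Append this week's score so progress can be reviewed later.
    scores = json.loads(SCORES_FILE.read_text()) if SCORES_FILE.exists() else {}
    scores[str(week)] = f"{correct}/{len(QUESTIONS)}"
    SCORES_FILE.write_text(json.dumps(scores, indent=2))
    print(f"Week {week}: {correct}/{len(QUESTIONS)} correct")

if __name__ == "__main__":
    run_quiz(week=1)
```

Recording a score per week gives students the same simple progress check the flashcard site provides, without tying revision to any particular platform.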

Due to the limits on classroom time, few explicit sessions on developing metacognitive skills are included in the course. What the module does at present is attempt to develop these through implicit demands made of students in the classroom, such as asking them to relate their answers to their own experience, or to things they may be familiar with from the news or even from popular fiction. There is also a more explicit session at the beginning of the course, prior to any teaching of content, where metacognitive skills are taught directly. Allan and Clarke (2007) suggest that these might be embedded in subject teaching, but incorporating them in a single module might be counterproductive due to the short timeframe (12 weeks) and repetition across other modules. At this point in time, I feel that a single session in the classroom, with the option of further one-to-one discussion on specific assignments, does well to support the development of metacognitive skills without fatiguing students with repetition.

I do have concerns that attempting to ‘embed’ these skills in a module too strongly could result in overburdening students or in over-assessment. Many generic skills are now being introduced as a compulsory part of initial study in a number of universities, but there is (understandably) no corresponding decrease in the expectations for academic content. The question of where and when students learn is also a key concern of applicants, who often wonder where their money (from the increased student contribution to fees) is being spent. There is a competitive view of contact hours among applicants (and their parents) which seems to demand more classroom time, and which implies that responsibility for learning outcomes is attributed more to the teaching than to the learning effort. An attempt to placate these demands with unspecified additional time in a room with a tutor, or some nifty new social-networking learning platform, without thorough consideration of where and when students learn metacognitive skills as well as content, seems fraught with peril for any university.

The blog is dead, long live the blog…

After several months without a post, I have finally accepted the inevitable, that I simply lack the discipline to commit to a regular blog on a single topic every week. I have therefore decided to resurrect the blog by incorporating more of my writing activity on other topic areas, including reflections on the everyday aspects of academic life and research writing on other topics.

Recently, there has been a rush of interest in the Treasure Trapped LARP documentary and the Scandinavian LARP Panopticorp. I still find these things interesting and will blog about them where possible. This week I have mostly been reading up on social science fieldwork and the production of ethnography. Ethnography, generally speaking, is an attempt to study and portray cultures and sub-cultures. Journalistic writing such as Lizzie Stark’s book is one of the areas in which the academic and the popular overlap, and it can be considered a sort of ethnography. My fieldwork reflections on LARP have always come from the perspective of being a LARPer first and a social scientist second, so the tales I can (or am willing to) tell are from a more native, and in a sense less ‘scientific’, perspective. However, I did use techniques to try to create a bit of distance between my experience and my reflection, and these are techniques I see role players using all the time (if you check out the Panopticorp video you will find them there). One technique is to imagine explaining your actions to a very different audience (people may distinguish between character roles and players here). Another is to closely examine the emotions experienced during and after the game, especially reflecting on times when you were in a good ‘flow’, ‘in the zone’, or ‘effortlessly in character’. Personally, I find this happens especially in horror LARP.

So I have found the Panopticorp video interesting, in particular because the players’ reflections have made me think a bit more about what I take away from a game besides whether it was fun or not. It also seems to be common practice in ‘Scandi-LARP’ to hold debriefing sessions both during and after the game. These seem to be really valuable to players and to game organisers, but I also think it’s important to stress the overwhelming preference in UK LARP for action. While I may well write a future post on this at length, some readers might want to look at this blog, where the author reflects on the sheer beauty of doing LARP.