Posted by: Tatjana | January 5, 2012

Activities and digital technology

Essay

Tallinn 2012

INTRODUCTION


‘What are your orders, Messire?’
Fagott asked the masked man.
‘Well, now,’ the latter replied pensively,
‘they’re people like any other people…’

Mikhail Bulgakov, The Master and Margarita

I am starting this essay by citing Bulgakov’s masterpiece, because the way Messire talks about people reminds me of how mankind depends on its own kind. People like any other people. This statement can be related to activity theory, which considers humans as part of their cultural background. From birth, every human adopts different methods of performing activities. These methods are often based on the experience of previous generations. However, some individuals create their own means and tools, which will influence people in the future. All in all, the results of people’s activities strongly affect their consciousness and shape a certain perception of reality. These notions are highly relevant today, because the whole digital world is based on activities that reshape our world-view every day. Technologies allow things that could hardly have been predicted in the previous century. This process is not neutral; it affects people and their way of thinking. Digital technology does not only provide tools for efficient problem solving, it changes people’s attitude to problems as such. All these issues will be discussed in this essay.

1. Basic principles of activity theory

Activity theory is a rather complicated theoretical framework that takes a beginner some time to understand. After deeper analysis, however, it turns out to be very dynamic and useful for studying people’s behavior in different disciplines, including human-computer interaction.

The concepts of activity theory were developed in the 1920s and 1930s by the Soviet scientists Alexei Leontiev and Alexander Luria (psychological approach) and Sergei Rubinshtein (neurophysiological approach), who based their work on the earlier studies of Lev Vygotsky. In the beginning, activity theory was seen as a narrow branch of behaviorism, but Leontiev extended Vygotsky’s research framework in many new ways. He showed, for example, that animals have an active relation to reality, and that all human processes can be analyzed at three different levels. His results helped to form a bigger picture of human relations to objects.

Activity theory attracted the interest of Western researchers from such intellectual traditions as cognitive science, American pragmatism, constructivism, and actor-network theory. It was introduced to the international audience in the late 1970s and early 1980s through the translated publications of Leontiev. Scandinavian activity theory, developed by Yrjö Engeström, united the Soviet and Western approaches.

Before analyzing human activities in digitally mediated settings, I would like to sketch the main principles of activity theory as I see them, based mostly on Leontiev’s studies.

Activity is a form of interaction during which animals and humans purposely affect surrounding objects in order to satisfy their own needs. Mental reality, which serves every interaction process, starts to form at the earliest age of animals and humans. Mental reality is a way of observing the environment and forming images of situations, which should be helpful when choosing the right behavior for established tasks. In other words, from birth we acquire knowledge about different situations and then learn to apply this knowledge to achieve various goals. The main difference between animals and humans is that animals can focus only on external, directly perceived aspects of the environment, while human activity (thanks to collective labor and intelligence) is based on symbolic forms of relating to objects.

According to Leontiev, there are three components of activity:

  • Motives that drive activity
  • Goals that are associated with expected results of activity and can be achieved by certain actions
  • Operations that serve as means for the achievement of goals

Actions are processes of interaction with an object, characterized by a previously established goal. The main components of an action are:

  • Making a decision
  • Realization
  • Correction and control

When making a decision, people link together an image of the situation and a way of acting. Realization, correction and control happen in cycles. Different means and tools, either adopted from others or individually discovered, are used at these stages.

An operation is a unit of human activity correlated with a goal and the conditions of its achievement. Operations, which help people achieve goals, are the result of adapting socially generated actions.
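
To make this hierarchy more concrete, here is a minimal sketch in Python (my own illustration, not Leontiev’s vocabulary) that models an activity driven by a motive, decomposed into goal-directed actions, which in turn are carried out through operations adjusted to concrete conditions:

    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class Operation:
        """Routine procedure adjusted to concrete conditions; carried out automatically."""
        description: str
        conditions: str

    @dataclass
    class Action:
        """Conscious process directed at a goal; realized through operations."""
        goal: str
        operations: List[Operation] = field(default_factory=list)

    @dataclass
    class Activity:
        """The whole activity, driven by a motive and composed of actions."""
        motive: str
        actions: List[Action] = field(default_factory=list)

    # Example: the activity "prepare dinner", driven by the motive of satisfying hunger.
    dinner = Activity(
        motive="satisfy hunger",
        actions=[
            Action(goal="find a suitable recipe",
                   operations=[Operation("search a recipe site by keyword", "laptop, internet access")]),
            Action(goal="cook the dish",
                   operations=[Operation("chop the vegetables", "knife, cutting board")]),
        ],
    )

Read from top to bottom, the structure mirrors the three levels mentioned above: why the activity happens (motive), what is done (goals), and how it is done (operations under concrete conditions).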

One activity or another may play a crucial role in psychological formation during the development of an organism. This, finally, brings us to the basic thesis of activity theory: “It is not consciousness that determines the activity, but the activity that determines consciousness”.

2. Activity theory and human-computer interaction

Nowadays digital technology is focused on user-centered design, the user experience, usability, usefulness, and user empowerment (Norman, 2002; Norman, 2004; Cooper, Reimann & Cronin, 2007). Developers have turned from the waterfall model of programming, in which coding, designing, testing and presenting were done separately, each at its own stage, to new ways of producing software. One of these ways is communicating with users as much as possible: creating personas of potential customers, writing user stories, and so on. Human psychology has started playing an essential role in the digital world. Therefore a huge interest in activity theory arose among the developers of digital interactive software.

How can we use activity theory in practice? Do we actually need activity theory in interaction design? My answer is yes, and here are the reasons. First of all, there is no stable theory applied to interaction design, and that turns development into guesswork. For example, there is a gap between the intentions of designers and the intentions of users (Kaptelinin & Nardi, 2006, p. 12). Secondly, the two approaches used to explain human-computer interaction do not give full answers to all the questions that arise. These approaches are the cognitive approach and ethnomethodology, and both have weak points. Cognitivists concentrate too much on algorithms, while people usually improvise and do not follow algorithms the way computer programs do. Ethnomethodology, by contrast, provides various practices and explanations, which are quite flexible but cannot be generalized.

So activity theory can be a theory that unites the previous approaches, identifies important concepts, suggests mechanisms to explain certain phenomena, and generates solutions to problems of interaction design. Let us now see how the principles of activity theory and human-computer interaction correspond to each other.

According to the writings of Victor Kaptelinin and Bonnie A. Nardi, the principles of activity theory can be used to reconsider some of the most central concepts of traditional human-computer interaction, such as transparency, affordance and direct manipulation.

Transparency has traditionally been considered a key aspect of user interface quality (Kaptelinin & Nardi, 2006, p. 79). For example, the followers of ubiquitous computing claim that technology should be invisible and infrastructures seamless (Bell & Dourish, 2006, p. 10). The founder of ubicomp, Mark Weiser, wanted technologies to disappear into the background; he saw them vanishing in the same way that electric motors vanished into a single machine (Weiser, 1991). Transparent interaction is interaction in which the user can focus on his work while the system remains “invisible”. However, recent ubicomp research shows that the idea of transparent and seamless technologies did not work out, mostly because digital tools became too personal to simply disappear. This paradox may be explained by activity theory: individuals concentrate not only on the result (goal) but also on the ways of achieving it (tools). So they are very much aware of their actions, but routine operations are carried out automatically, without interfering with conscious processes. In other words, transparency can be reached not by hiding tools but by the automatization of skills. Skills become automatized when a user performs the same actions several times and finally remembers all of them. Developers are aware of the importance of analogous actions. For example, every new Apple application should follow the same logic as the existing ones; otherwise the user would need to learn over and over again every time he opens an app. This would make completing tasks and reaching goals very difficult and time consuming.

Affordances are the possibilities for action provided by the environment; generally speaking, an affordance is what the environment gives to humans or animals. In human-computer interaction, affordances may be interpreted as features that allow something to be used in a certain way. For example, an iPad screen affords touching or swiping; a laptop keyboard affords pressing, and so on. Affordances are also interpreted in terms of low-level manipulation of physical artifacts (Kaptelinin & Nardi, 2006, p. 81). Since activity theory is based on the interaction between individuals and objects, it fits the idea of affordances. However, it does not agree that affordances should be static. Activity theory stands for a constantly changing environment, which is why its notion of affordances needs to be extended to human activity as a whole, not just the level of static operations.

Direct manipulation was a fundamental concept of human-computer interaction until it was challenged by activity theory. Developers assumed that individuals manipulate objects directly, without concentrating on instruments. However, a recent activity-theory-driven analysis of user interfaces revealed that people seldom operate on their objects of interest in a direct way. On the contrary, people interact extensively with instruments before manipulating an object. For example, when scrolling a document they operate on a scroll bar and, through it, on the document (Kaptelinin & Nardi, 2006, p. 83).

These were just a few examples of how activity theory principles change the standpoint of human-computer interaction. The theory teaches that our actions are interrelated and that they can influence our further decisions. That is why every new piece of software has to be created in relation to existing concepts, because this makes the interaction process much easier. In addition, developers should pay more attention to designing tools, because people love using tools, especially when they are attractive and handy. Affordances are not static; they can be modified both by the environment and by people.

3. Is digital technology neutral?

The notions of activity theory can show that digital technologies are not just instruments for the achievement of goals. Or rather, they were intended to serve as such but have exceeded expectations. The vision of technologies as mere tools was shared by ubiquitous computing developers, who imagined a world full of hidden devices helping people live a better lifestyle and do daily tasks faster. Today, however, digital devices have become so visible and so personal that they can no longer be considered simple tools.

This is mostly an issue of design and marketing. Centuries ago, instruments were not necessarily beautiful; there was a clear line between art and craft. A cup served as a container for water, a hammer was an instrument for nailing, a spoon was there for eating, and so on. Nowadays there are plenty of brands and models, and manufacturers worry about selling their items. Design is the main thing (in addition to functionality and quality) that differentiates one product from another. So much attention is paid to visual form that even simple tools, intended to help with a single task, look like pieces of art. So do digital instruments.

The consequence of overwhelming design is a change in people’s attitude to their tools. I do not regard my smartphone as a device for making calls and checking e-mails. I love it as an object, and I do not always interact with it to complete a certain task; sometimes I literally enjoy scrolling, touching and swiping. So digital technology is not neutral, it definitely affects my consciousness. I am not sure whether it is autonomous or not, but it is definitely far more than a mere tool or instrument.

As an example, let us discuss digital technology as a way of offloading our mind onto external resources, such as search engines, digital dictionaries, digital notes, calendar entries and so on. In the beginning, these tools were created to simplify the process of achieving concrete goals. Nowadays people rely so much on Google that I can hardly imagine modern life without it. This is how performing activities transforms our consciousness. Therefore, search engines are not neutral anymore.

It seems that people are becoming less concerned about developing their internal cognitive resources. They rely on the various mind-extending devices surrounding them (search engines are simply the most illustrative example). Even the educational system is changing: teachers revise methods that are no longer suitable for students, who do not look through piles of encyclopedias anymore but spend a few minutes surfing the Web instead.

Now we have a situation in which certain activities change the whole environment. This approach is closer to cognitive science than to activity theory. It is evident that human tools have changed throughout the centuries: from the stick to ink and finally to the keyboard and laptop. Google is a product of necessity, “an evolutionally produced device got from people who do not simply live their environment, but actively shape and change it” (Bardone, 2010, p. 63).

Moreover, our digital activities also shape other daily activities. As a result, we start doing many unnecessary things which we would not do if the technology did not exist.

I often find myself doing completely useless things via Google: searching for celebrities and their biographies, checking word spelling (even when I am sure how to spell), looking for song lyrics, and so on. After I get the answers I feel lazy rather than satisfied, because the information was not obtained the hard way; I just made a few movements.

The easier the access, the less effort our brain needs to complete the task. As a result, we become lazier (both mentally and physically), which is not the best state for the human organism. Perhaps our brain is in the same danger as our body. Some researchers claim: “Physical technology altered the frequency, intensity and manner of our muscle use, altering our muscular development (even introducing new ‘technological diseases’, such as carpal tunnel syndrome). Cognitive technology will do likewise, but instead of affecting our muscles it will affect our brain development, organization and capacities. Changing how we think, learn and communicate, our cognitive tools are reshaping our minds” (Dror & Harnad, 2008, p. 21).

We also become lazier because there is no need to learn and memorize new things if our cognition is distributed elsewhere, whether or not our brain still controls it. By offloading parts of our information onto digital technology, we reshape our organisms. That basically means that the activity allowed by technology affects human behavior and mental states. And this is one more illustration of activity theory.

4. Redesigning activity

In this chapter I want to turn to daily activities that can be reorganized with the help of emerging digital technology. Many interesting suggestions have been offered by my colleagues. For example, offline reading, discussed by Valeria Gasik, can be improved with better illumination, better content and better concentration. In her opinion, this can be done by going beyond the technologies: if you are tired of artificial illumination, take a break; if there is a lack of concentration, stop multitasking; if you do not like the content, change the genre. Not everything depends entirely on the computer.

Kristo Vaher suggests changing the strategy of e-learning. By now, online courses include a huge number of papers for independent reading, video conversations and chat rooms. Each of these methods might be useful for some concrete purpose but not for e-learning as a whole. Vaher proposes making online studying more interactive and closer to natural communication. The main point is to provide clear feedback from students to the teacher, which can highlight the main misunderstandings and prompt questions.

Mehrnoosh Vahdat speaks about the lack of concentration caused by multitasking. She claims that everything she does with digital technologies could be done just as successfully without them. It means that tools which were once intended to help people do not help them anymore. Moreover, they produce new problems to be solved, such as how to pull oneself out of Facebook and force oneself to write an essay.

I am not as pessimistic as Mehrnoosh, but I have also noticed that many daily activities should be redesigned within digital technology, because the way things work today does not make tasks much easier to solve.

Let us discuss one of the most essential human activities: finding something to eat. The question “what to eat?” arises several times a day. The fastest solution is to eat out, but those who prefer home-made food usually think of a recipe and the products they need to buy. I love cooking, and I have thousands of recipes stored in books, brochures and magazines, written on pieces of paper and saved in the bookmarks of my browser. One day I realized that it had become pretty hard to find my favorite recipe, or even any suitable recipe, in such a huge mess.

There are plenty of Internet resources about cooking: personal blogs, communities, websites, groups in social networks and so on. The main problem is that each source has its own way of storing recipes. Some websites publish step-by-step illustrated instructions; some provide a list of ingredients and a short description. Ingredients can be given in different measuring systems: grams, spoons, cups, etc. Search tools also differ: it is possible to search by typing keywords, browsing catalogues, asking questions in forums and so on. And even if I succeed in finding the recipe, how can I take a laptop to the kitchen and keep it in front of my eyes the whole time? All the same, I have to write the recipe down on a piece of paper, take it with me and stick it somewhere on the wall. So creating a personal database with all my recipes turns out to be a difficult thing.

Let us classify the main problems once again and point them out here:

  • Hard to find an appropriate and trustworthy source for a recipe (blog, community, website, group, forum)
  • Hard to search for the right recipe (keywords, catalogues)
  • Hard to follow the instructions (different measuring systems)
  • Inability to take a laptop to the kitchen

Using Valeria Gasik’s method of converting problems into goals, we get the following challenges:

  • Better orientation among recipe sources
  • Better searching tools
  • Unique instructions
  • Possibility to follow the instructions while cooking

Better orientation among recipe sources

This can be done by creating a single database that includes links to the different sources; this will prevent the same recipes from being repeated. When joining the database, a source has to follow simple rules, such as keeping the same style of naming recipes, providing illustrations or video instructions, etc. There should also be a motivation for different communities to provide their information to the database: a much larger audience, the ability to vote for recipes, choosing the best recipe or the best instructions of the day, and other interactive motivators. The user interface of such a database might look somewhat like eBay listings.
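
As a rough illustration, a record in such a database could look like the following sketch (the field names are hypothetical, chosen only to show the idea of unified naming, source links and voting):

    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class RecipeEntry:
        title: str                   # unified naming style required from every joining source
        source_url: str              # link back to the original blog, community or website
        categories: List[str]        # catalogue classification, e.g. ["soups", "vegetarian"]
        ingredients: List[str]
        steps: List[str]
        illustration_urls: List[str] = field(default_factory=list)
        votes: int = 0               # interactive motivator: users vote for the best recipes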

Better searching tools

Searching for recipes can be done in two ways: by typing keywords or by browsing through a catalogue. The browsing system should include several levels of classification, for example search by products or by meal type (desserts, soups, drinks, etc.).
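
Building on the RecipeEntry sketch above, both search modes could be as simple as the following example (again, only an illustration of the idea, not a finished design):

    def search_by_keywords(entries, keywords):
        """Return entries whose title or ingredients mention every given keyword."""
        keywords = [k.lower() for k in keywords]
        def matches(entry):
            text = (entry.title + " " + " ".join(entry.ingredients)).lower()
            return all(k in text for k in keywords)
        return [e for e in entries if matches(e)]

    def browse_catalogue(entries, category):
        """Return entries filed under a given catalogue category, e.g. 'desserts'."""
        return [e for e in entries if category in e.categories]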

Unique instructions

As different people are used to different measures, at least two versions should be provided: quantities in grams/kilos and quantities in spoons/cups. A description of fruit and vegetable sizes is also an advantage, for example: one medium-sized banana, five small cherry tomatoes, etc.
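
A small conversion table could back such double instructions. The factors below are rough approximations for illustration only, since real densities vary by ingredient:

    # Rough conversion factors (grams per unit); real values depend on the ingredient.
    GRAMS_PER_UNIT = {
        ("flour", "cup"): 120,
        ("sugar", "cup"): 200,
        ("butter", "tablespoon"): 14,
    }

    def to_grams(ingredient, amount, unit):
        """Convert a spoons/cups quantity into grams, if a factor is known."""
        factor = GRAMS_PER_UNIT.get((ingredient, unit))
        if factor is None:
            raise ValueError(f"No conversion known for {ingredient} in {unit}")
        return amount * factor

    print(to_grams("flour", 2, "cup"))   # 2 cups of flour -> 240 g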

Possibility to follow the instructions while cooking

It is dangerous to take a laptop to the kitchen because it may get wet or dirty; on the other hand, switching between the laptop and the kitchen is not the best solution either. I suggest audio recipes that can easily be heard in the kitchen. In addition, if the device has speech recognition, the user may ask it to repeat instructions or ingredients.
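
A minimal sketch of such an audio walkthrough might look as follows; speak and listen stand in for whatever text-to-speech and speech-recognition functions the device actually provides, so they are placeholders, not real APIs:

    def read_recipe_aloud(steps, speak, listen):
        """Read recipe steps aloud, repeating a step whenever the cook asks for it."""
        i = 0
        while i < len(steps):
            speak(f"Step {i + 1}: {steps[i]}")
            command = listen()          # expected answers: "repeat", "next", "stop"
            if command == "repeat":
                continue                # say the same step once more
            if command == "stop":
                break
            i += 1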

iCooker or facing the future

The methods I have offered to improve our daily cooking activity are not perfect. In my view, there are still not enough suitable and affordable technologies to make preparing food smooth and easy. The ideal picture for me is a special device, let me call it iCooker, with a scanner, a display, a scale and speakers. It can easily be placed in the kitchen and improve the cooking activity.

How does it work? Imagine you have a number of products at home: cheese, milk, butter, tomatoes, potatoes and garlic. Can you cook them together? If imagination is not enough to pick the right recipe, iCooker will help.

First of all, iCooker scans your products. You put the items one by one in front of the green laser and the device recognizes what type of product each is. When all the items are scanned, you set preferences and give commands: for example, ask the device to find recipes that include only the products available, or request recipes that require two or three additional products (maybe you have them but did not notice at first).

The recipes appear on the screen. They can be sorted by popularity, ease or cooking time. You may also choose the number of portions needed. iCooker identifies not only the type of a product but also its size and weight, as it can be connected to a small electronic scale via Bluetooth.
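
The matching logic behind these commands could be sketched roughly like this (an illustration of the idea only; the recipe fields and the device’s real software are assumptions of mine):

    def suggest_recipes(recipes, available, max_missing=0, sort_key="popularity"):
        """Suggest recipes for the scanned products.

        recipes     -- list of dicts such as {"title": ..., "ingredients": set(...),
                       "popularity": ..., "minutes": ...}
        available   -- set of scanned product names
        max_missing -- 0 keeps only recipes fully covered by the available products;
                       2 or 3 also allows recipes needing a few extra purchases
        """
        suggestions = []
        for recipe in recipes:
            missing = recipe["ingredients"] - available
            if len(missing) <= max_missing:
                suggestions.append((recipe, missing))
        # Sort by popularity (highest first) or by cooking time (shortest first).
        if sort_key == "popularity":
            suggestions.sort(key=lambda pair: -pair[0]["popularity"])
        else:
            suggestions.sort(key=lambda pair: pair[0]["minutes"])
        return suggestions

    scanned = {"cheese", "milk", "butter", "tomatoes", "potatoes", "garlic"}
    # suggest_recipes(all_recipes, scanned, max_missing=2)

Called with the products from the example above and max_missing=2, such a function would return both the recipes that are fully covered and those needing a couple of extra purchases, already sorted by the chosen criterion.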

Once preferences have been chosen, iCooker gives you general instructions and then shows pictures of every stage. An audio option is available as well. You can also “talk” with iCooker, asking it to repeat any stage.

As for durability, iCooker can easily be installed in your kitchen; it is not afraid of water or oil. It can be cleaned and replaced, runs on batteries, and has to be charged two or three times a week.

In my opinion, this kind of tool can reshape our activity and also change our attitude to cooking in general. If ubiquitous computing is our future, there will be plenty of devices like the one described above. For now, however, only online recipes are available, which is why the cooking activity should be improved within the frame of existing means.

CONCLUSION


In this essay the basic principles of activity theory were discussed. Activity theory is a complex theoretical framework which was developed in the 1920s and 1930s by Soviet researchers and later recognized in the Western world owing to Scandinavian researchers. It shares a common understanding of human activities with other theories, such as cognitive science, American pragmatism, constructivism, and actor-network theory. Today activity theory attracts great interest because it can be applied to human-computer interaction.

Activity theory has reshaped existing concepts of interactive technology, such as transparency, affordances and direct manipulation. The theory has shown that people do not treat digital devices as mere tools, and that activity performed with technology is constantly reforming our consciousness.

Therefore digital technology is not neutral anymore. It changes people’s attitude to common things. For example, search engines initially appeared as instruments, but nowadays searching is no longer just the use of an instrument; it is an activity in itself, which is sometimes performed without any clear purpose.

Looking at various digital devices and analyzing their usefulness, we can suggest new features and new solutions for our daily activities. Relying on their previous experience, people can find new methods for solving problems faster. The example presented in this essay is iCooker, a digital device used for generating recipes. If more devices like this appear, human life will soon be wrapped in technology and one will not be able to imagine any daily activity without it. But one never knows whether people will smoothly adopt this lifestyle or drop it one day and start acting as before. People like any other people.

REFERENCES


  1. Bardone, E. (2010). Moving the Bonds: Distributing Cognition through Cognitive Niche Construction. In E. Bardone, Seeking Chances: From Biased Rationality to Distributed Cognition. Springer, Berlin, in press.
  2. Cooper, A., Reimann, R., & Cronin, D. (2007). About Face 3: The Essentials of Interaction Design. Wiley Publishing, Inc.
  3. Dror, I. E., & Harnad, S. (2008). Offloading cognition onto cognitive technology. In Dror, I. E. & Harnad, S., Cognition Distributed: How Cognitive Technology Extends Our Minds. John Benjamins Publishing Company, New York.
  4. Engeström, Y. The future of activity theory: a rough draft: http://lchc.ucsd.edu/mca/Paper/ISCARkeyEngestrom.pdf
  5. Gasik, V. (2010). IFI7144 Task 13: Re-designing activities. Retrieved from: http://sokerirulla.wordpress.com/tag/ifi7144/
  6. Kaptelinin, V., & Nardi, B. A. (2006). Acting with technology. Activity theory and interaction design. London, The MIT Press.
  7. Norman, D. A. (2002). The Design of Everyday Things. Basic Books, New York.
  8. Norman, D. A. (2004). Emotional Design: Why We Love (or Hate) Everyday Things. Basic Books, New York.
  9. Vahdat, M. (2010). Redesigning activities. Retrieved from: https://mehromedia.wordpress.com/2010/12/14/task-13-redesigning-activities/
  10. Vaher, K. (2010). Redesigning and re-instrumentalizing e-learning. Retrieved from: http://waher.net/archives/821
  11. Weiser, M. (1991). The Computer for the 21st Century. Scientific American.
  12. Wikipedia. Activity theory: http://en.wikipedia.org/wiki/Activity_theory
  13. Леонтьев, А. Н. (1975). Деятельность, сознание, личность. Москва, Политиздат. Retrieved from: http://intelligence.su/lib/00027.htm
Posted by: Tatjana | December 31, 2011

Final essay / Ethics and Law in New Media/

MOVING UBICOMP FROM THEORY TO PRACTICE

Essay

Tallinn 2011

INTRODUCTION


Ubiquitous computing theory differs from its practical usage. Several predictions were made twenty years ago by the inventors of ubicomp, but the system itself has not succeeded in working to its full extent. However, predictions do not have to be 100 percent true; they just give directions for further development. Nowadays researchers try to combine the initial theory of ubiquitous computing with modern social and technological conditions. Apparently, the era of ubicomp has already arrived: there are multiple devices which make ubiquitous computing available, but at the same time they are too varied to make the system work seamlessly. Therefore the ubiquitous computing of the modern age is extremely diverse.


Ruining the expectations

The term ubiquitous computing (ubicomp) was first introduced by Mark Weiser in a Scientific American article in 1991 and seemed to be quite a clear, specific notion. Since then, it has gained plenty of definitions and synonyms and has been actively discussed among researchers. The main question of their discussion is why ubicomp is not working in the way it was expected to.

Theory turned out to be different from practice. There is a good example regarding this issue, which illustrates the gap between creators and users. An urban legend says that the inventor of the sugar stick went mad and committed suicide after he realized that people did not understand how his invention was intended to be used. People were expected to break the stick in the middle, letting the sugar flow out from both ends of the tube. In fact, everyone simply cut one end of the stick and poured the sugar out from the other.

Fortunately, the researchers who are still working at the birthplace of ubicomp are not going to commit suicide just because the initial system runs differently in practice. For example, James Bo Begole, a PARC Principal Scientist, claims in his blog that proposing a formal definition of ubiquitous computing would degrade into never-ending semantic or ontological debates. He explains: “We’re not trying to write science fiction here, we’re trying to create systems that help people throughout their life” (Begole, 2011). Personally I believe that practical usage always wins; hence the real ubicomp is the one which works in the wild, not in laboratories.

I will discuss these issues in the following paper, relying mostly on the book Ubiquitous Computing Fundamentals (Krumm, 2010), which includes eleven chapters by different authors, and on two articles: “Yesterday’s tomorrows: notes on ubiquitous computing’s dominant vision” (Bell & Dourish, 2006) and “The Computer for the 21st Century” (Weiser, 1991).

Ubicomp practice against its theory

When I heard the term ubiquitous computing for the first time, I was pretty sure it was a new phenomenon of the modern information society, and I did not even realize that the inventors of the ubicomp theory were planning to produce something more than just the omnipresence of computer technologies. Obviously the point was not only to implement computers everywhere, but to do it in a certain way: seamlessly. According to the article “Yesterday’s tomorrows…” by Genevieve Bell and Paul Dourish (2006), the main point of the ubicomp founders was making technology invisible and infrastructures seamless: “The ubicomp world was meant to be clean and orderly” (Bell & Dourish, 2006, p. 10). However, today we have all the predicted devices, but they are not working the “right way”; hence Bell and Dourish come to a conclusion: “Rather than being invisible or unobtrusive, ubicomp devices are highly present, visible and branded” (Bell & Dourish, 2006, p. 10).

It strikes me that some researchers want predictions to be realized to the full extent. In my view, scientific predictions give us just a hint for further studies and experiments. Therefore Weiser’s conception of ubicomp should be considered a set of directions and means for ubicomp development. Discussing Weiser’s predictions and comparing modern results with his expectations does no good to the practical side of the question.

Weiser wanted technologies to disappear into the background; he saw them vanishing in the same way that electric motors vanished into a single machine (Weiser, 1991). However, this vision did not come to pass, because computing devices became too personal to just disappear. Moreover, every ubiquitous system means putting our wishes and actions under control, which runs contrary to the principles of a democratic society. If I come home in a bad mood, I probably do not want any device saying welcome, I do not need the TV turned on, I would rather have a good sleep for several hours.

Personal wishes change all the time, but the system is programmed in a certain way; it cannot simply read my mind and choose the right options. If several people use the same applications, they probably want those apps to work in different ways. The problem is that ubicomp is not a guessing machine; it is a well-programmed instrument for making our life easier. But if it starts acting without the owner’s will, mood, permission and other forms of control, it becomes too officious and nosy.

Capitalism has also been influencing ubicomp. Since technology became so popular and more people started studying ICT, a great number of different digital solutions have been introduced. Users got the right to choose and became very involved in the design process. A new stage of interaction design, for instance, implements Goal-Directed and User-Oriented design, which means cooperating with clients, businesspeople and potential users from the very beginning of creating any kind of interactive software. As a result, we get plenty of branded products designed to attract more customers. How can they be implemented seamlessly or hidden “behind the walls”? Modern digital devices have become objects of high interest, public demonstration and boasting. They are too varied and personal; hence society will never rely on one single digital system for running its business, studies, household or whatever else.

Apparently the problem is that technology was put ahead of human psychology. It seems to be somebody’s fixed idea to follow Weiser’s predictions and try to hide digital technology completely out of sight. We should first release some ubicomp features to society and see what happens next; only then should further implementation be launched. People will never start using even the most intelligent system if it bothers or bores them. Sometimes things work differently in practice than in theory.

Modern researchers in the ubicomp area understand this fact and implement user-centered design in their work and studies. First they explore current user behavior: what are people doing now? Then they study the proof of concept: does my novel technology function in the real world? Finally, they use a prototype: how does using my prototype change people’s behavior or allow them to do new things (Bernheim Brush, 2010)? Several ubicomp gadgets have been built this way. One example is the TeamAwear system, developed by Page and Vande Moere, a novel wearable display system for team sports: augmented basketball jerseys worn by players that display game-related information such as the number of points scored and fouls. Before implementation, the device was tested with representative users (Bernheim Brush, 2010).

Another example of user-centered design in ubiquitous computing is the CareNet display, which shows data sensed about an elder’s activities (e.g. medication taken, activity levels) to members of the elder’s care network on an ambient display. Researchers used the Wizard of Oz technique to gather data for the CareNet display by phoning the elders several times a day (Bernheim Brush, 2010). The Wizard of Oz technique is an efficient way to examine user interaction with computers and facilitate rapid interactive development of dialog wording and logic (Green & Wei-Haas, 1985). It enables unimplemented technology to be evaluated by using a human to simulate the response of a system. This technique can be used to test device concepts, techniques and suggested functionality before they are implemented.

In addition, there are more methods of studying user habits, for example the Experience Sampling Methodology (ESM), Goal-Directed design and others. All these methods are meant to help us understand people’s logic better. Ubicomp can become invisible when it runs one step ahead of the user’s will.

Directions-giving predictions

As mentioned before, scientific predictions should not be considered axioms or postulates. They just give further researchers more or less clear directions, while the researchers themselves should compare the current state of society with the available technology, decide how they are connected, and ask why people refuse to use one feature or another in the way scientists intended.

According to Bell and Dourish (2006), all of us are already living in a ubiquitous computing era. “The challenge, now, is to understand it,” they claim (Bell & Dourish, 2006). In order to do that, I turned to the book Ubiquitous Computing Fundamentals (Krumm, 2010), a most useful collection of ubicomp tutorials.

In fact, the examples used in the book prove that Weiser’s theory has not been relegated to oblivion; it is working, but not to its full extent. In the late 1980s, PARC embarked on the design of three devices: the Tab, an inch-scale computer that represented a pocket book or wallet (Weiser, 1991); the Pad, a foot-scale device serving the role of a pen-based notebook; and the Liveboard, a yard-scale device with the functionality of a whiteboard. Today there are already several analogous commercial devices in wide use (Want, 2010). Smartphones are much like Tabs, modern laptops are very similar to Pads, and large-screen LCD displays with 50 to 60 inch diagonals can technically fill the function of Liveboards. The examples of Singapore and Korea illustrate how the combination of these technologies can lead to social benefits (Bell & Dourish, 2006).

If ubicomp is really working, why do its results not satisfy some researchers? Bell and Dourish, for instance, see the problem in infrastructures. According to their article (2006), in practice infrastructures are continually visible and remain messy. Yes, they are visible, but what if they have only just started to disappear? Weiser claimed that “disappearance is a fundamental consequence not of technology but of human psychology” (Weiser, 1991, p. 1). What if our psychology is not ready yet?

However, I cannot agree that infrastructures are messy; they are various and diverse rather than messy. In the book edited by John Krumm, eleven experts on ubicomp pretty much succeed in explaining how the system of ubiquitous computing is built. Ubicomp systems aim for a heterogeneous set of devices, including invisible computers embedded in everyday objects such as cars and furniture, mobile devices such as personal digital assistants (PDAs) and smart phones, personal devices such as laptops, and so on (Bardram & Friday, 2010). Why do all these ubiquitous devices seem messy to Bell and Dourish? In my view, the reason could be that the devices have different operating systems, networking interfaces, input capabilities and displays. They are simply too different to be united into the one system once imagined by Weiser (1991): “Tabs are the smallest components of embodied virtuality. Because they are interconnected, tabs will expand on the usefulness of existing inch-scale computers, such as pocket calculator or pocket organizer”.

In fact, the modern picture of ubicomp shows that different devices can be successfully interconnected, though maybe not in the way Weiser wanted them to be. Jakob Bardram and Adrian Friday (2010) describe these connections very well: “Ubicomp systems are composed of distributed, potentially disjoint, and partially connected elements (sensors, mobile devices, people, etc.)”. It is important that they are connected partially, which means that the system is the product of spontaneous exchanges of information when elements come together. Interaction patterns and duration vary with the design and ambition of any given system. This is the modern view of ubicomp: spontaneous and multiform, but not messy.

CONCLUSION


Even though modern technologies make ubiquitous computing available, the whole system cannot work as planned by its inventors. And it should not work this way, because social conditions are changing so fast that people today do not need what they needed twenty years ago.

Practical testing and implementation are the best way to see if a solution produced in a lab really works in the wild. Many ubicomp researchers rely on Goal-Directed and User-Oriented design methods to make their devices more responsive to needs.

There are several claims that ubicomp infrastructures are messy and require better understanding. Naturally, the processes of development and study run side by side and result in new ideas and challenges. For now, ubiquitous computing is diverse rather than messy. It consists of partially connected devices which should be improved. For that purpose, the ubicomp field is cooperating with psychology, trying to understand human needs and wishes.

REFERENCES


  1. Bardram, J., & Friday, A. (2010). Ubiquitous Computing Systems. In Krumm, J. (Ed.), Ubiquitous Computing Fundamentals (pp. 1-35). Boca Raton: CRC Press.
  2. Bell, G., & Dourish, P. (2006). Yesterday’s Tomorrows: Notes on Ubiquitous Computing’s Dominant Vision. Personal Ubiquitous Comput., 11, 133-143.
  3. Bernheim Brush, A. J. (2010). Ubiquitous Computing Field Studies. In Krumm, J. (Ed.), Ubiquitous Computing Fundamentals (pp. 1-35). Boca Raton: CRC Press.
  4. Bianchi, A. (2010). Sugar, babe! [Also Plants Fly blog]. Retrieved from: http://www.alsoplantsfly.com/2009/08/sugar-babe.html
  5. Bo Begole. (2011, March 2). Defining ubiquitous computing vs. augmented reality. [PARC blog post]. Retrieved from http://blogs.parc.com/blog/2010/03/defining-ubiquitous-computing-vs-augmented-reality/
  6. Green, P., & Wei-Haas, L. (1985). The Wizard of Oz: a tool for rapid development of user interfaces. University of Michigan Transportation Research Institute, Ann Arbor. Retrieved from: http://deepblue.lib.umich.edu/bitstream/2027.42/174/2/71952.0001.001.pdf
  7. Want, R. (2010). An Introduction to Ubiquitous Computing. In Krumm J (Ed.), Ubiquitous Computing Fundamentals (pp. 1-35). Boca Raton: CRC Press.
  8. Weiser, M. (1991). The Computer for the 21st Century. Scientific American.

To Do

Analyse both free software and open source approach in your blog. If you prefer one, provide your arguments.


I have to admit that the terms “free software” and “open source” sounded confusing to me. But now that I have read the lecture notes of Kaido Kikkas, I understand the difference. In addition, two articles by Richard Stallman explain everything clearly enough.

Now I see that:

  • the term “free software” existed first
  • the term “open source software” was created to propose the ideas of free software to the business world, excluding the confusing (or annoying?) word “free”
  • “free” in the sense of software is closer to the concept of “free speech”, not “free beer”
  • if the word “free” is interpreted in the right way, it has a very deep, philosophical and socially important meaning
  • the term “open source” is too narrow and reveals only some of the ideas initially intended by its creators. The term says nothing about freedom (but it should!).
  • some software

Regarding all that, I deeply support Richard Stallman’s ideas, but unfortunately they are not very practical. Philosophy is a good thing for mental development, but nowadays it is simplicity that rules the world. The ideas of freedom in software production seem vague and foggy. “Open source” is easier to understand: “You can look at the source code” – that’s it! Oh… it means more than that? Well, most people just don’t care.

Posted by: Tatjana | December 31, 2011

Task 20 – The Digital Enforcement

To Do

Write a short analysis about applicability of copying restrictions – whether you consider them useful, in which cases exceptions should be made etc.


I have never heard of any of my friends or acquaintances being prosecuted for violating copyright restrictions, even though almost every one of them has downloaded a cracked version of proprietary software at least once. Programs with keygens can easily be found on the Internet. It seems that copyright restrictions are not useful. But not everywhere.

In 2006 I visited my pen pal in Germany. She is very fond of music and has a huge collection of CDs. When we started to talk about one song, I suggested downloading it from the Internet. My friend was wide-eyed. She said they were not allowed to download songs “because the police officers control every computer by IP address, and they will immediately come to your house”. The same rule applied to software as well. I was a bit shocked by her reaction. But now I see that if there are people who respect copyrights and restrictions, the latter may be considered useful.

To Do

What could the software licensing landscape look like in 2015? Write a short (blogged) predictive analysis.


Nowadays there is a tendency to develop free and open source software. If it continues the same way, more people will be integrated into the software development process. However, 2015 is not far away; that is too soon to expect entirely open software. And there are still too many people who do not get the idea of F/OSS, and thus simply do not need it.

The best way to predict the future is to look back. The situation three years ago was not very different from what we have now: the same major types of software existed, namely proprietary, free/open source and commercial.

When speaking about software licensing, I would like to separate IT professionals from ordinary users, because these two groups have different goals for using software. For example, hackers are much more interested in open source software than average users. Driven by curiosity and supporting the ideas of freedom, they want to look inside the code. Of course, this is not legal with proprietary software (though there are many hackers who ignore legal issues), so why not do it freely with open source software such as GNU/Linux, for instance.

Ordinary users, on the contrary, prefer (and always will) complete, safe, stable and comprehensive software, which is installed once and cannot be changed. Normally people get scared if a program stops working: they do not know how it works, and there is no way they can fix the code. According to this logic, it is safer to obtain software which is protected and cannot be modified. The risk of getting a raw product is minimal, and customers are ready to pay. That is common consumer behavior. They act the same way when traveling: “all-inclusive” trips organized by travel companies seem more attractive (no matter what the price is) than independent adventures by one’s own car, coach or rail. It is a pity that such people do not even know about the opportunities of independent travel (read: self-modified software).

The most noticeable change among ordinary users during the past decade has been Microsoft losing its monopoly in the software market. More and more ordinary users whom I personally know have started buying MacBooks and working with Apple software. This is mostly because of the introduction of the iPhone and iPad, which revived Apple’s popularity. Apple Inc. offers open source software as well, so this might be a step towards our future. However, Apple products are very expensive, so there is another problem: breaking licenses to get products for free (for example, downloading a cracked version of Xcode). Software licenses have never been an obstacle to violation. I hope that by 2015 a balance between law and the social understanding of software property will more or less be reached.

Posted by: Tatjana | December 31, 2011

Topic 18 – The Millennium Bug in the WIPO Model

To Do

Find a good example of the “science business” described above and analyse it as a potential factor in the Digital Divide discussed earlier. Is the proposed connection likely or not? Blog your opinion.


As we already know, there are different aspects of the Digital Divide, such as physical access, affordability, age, education, etc. But how can intellectual property enlarge the gap between users of digital products?

Well, in the academic world, the struggle for copyrights may slow down pure innovation and therefore cause scientific work to proceed at different speeds. This may affect the Digital Divide.

Turning science into business has a bad influence on science itself. The value of intellectual property is so high that researchers can easily confuse earning money with the pure scientific process. Instead of being satisfied with a new invention, some scientists care about fees and patents.

This kills the idea of science as a tool for developing the planet. The ideal model is when science serves people, answers their growing needs and, as a result, raises the educational level of the population.

An example of the USSR censorship model was given in K. Kikkas’s lecture notes. I will use the same example but in the opposite way. I believe that Soviet times were a golden age of science. A great deal of work was done in the fields of astrophysics, geophysics, mineralogy, medicine, genetics, biophysics, nuclear physics, and electronics. In total, 400 discoveries were made over 35 years. All academic work was paid for entirely by the government; even though each discovery was associated with its author, there was no such thing as financial benefit from your own product. The object pursued by every scientist was not business value but social value.

Speaking about censorship, every scientific book had to be read through, corrected and edited several times before publication, and this was done at a very high level. The editors were professionals not only in editing but also in the field of the book they edited. Isn’t that an advantage? As a result, books were highly competent and grammatically correct. Even nowadays Большая советская энциклопедия (Bolshaya sovetskaya entsiklopediya, The Great Soviet Encyclopedia) is one of the largest and most comprehensive encyclopedias in Russia and in the world.

Now look at modern “scientific” editions. Some of them are not even grammatically correct; others just pretend to be scientific but do not follow any academic conventions. And look at the prices! Everyone (scientists, editors, publishers) works for profit, not to educate people and share knowledge with colleagues. To be fair, they follow the needs of average readers, who are looking for easy language and simple explanations. I think promoting this kind of scientific literature is harmful for society. It will certainly widen the gap between the truly educated academic world and those who prefer commercial scientific editions.

That is why many academics, for example R. Preston McAfee, support open access to academic works. The existence of open access will help people find essential articles aimed at sharing new knowledge instead of sharing business value.

The new problem is the absence of professional editing in openly published materials. In other words, any student or any professor (no matter how competent they really are) can publish and present theories without higher control. The solution may be approval by other academics. Take Wikipedia as an example: the quality of articles is controlled by other writers, so collective intelligence does the work of censorship.

To Do

Study the GNU GPL and write a short blog essay about it. You may use the SWOT analysis model (strengths, weaknesses, opportunities, threats).


To Do

Study the Anglo-American and Continental European school of IP. Write a short comparative analysis to your blog (if you have clear preference for one over another, explain that, too).


SCHOOLS OF INTELLECTUAL PROPERTY

Similarities (both schools)

  • the schools are not contradictory and could be reconciled
  • they protect expressions of ideas, not ideas as such
  • they protect inventions (patents), writings (copyright), trademarks, trade secrets, designs and models
  • the subject matter of intellectual property is largely codified
  • does not map out the entire landscape

Differences: Anglo-American

  • “fair use” – economic efficiency, utilitarian perspective
  • copyright serves, and should serve, to maximize social wealth
  • greater potential protection
  • the protection of authors is less extensive
  • permits reverse engineering
  • right of parody is broader
  • imposes a double standard favoring large companies generally, and large American companies particularly
  • has a principle that allows anyone to make limited use of another’s copyrighted work for such purposes as criticism, comment, news reporting, teaching, scholarship, and research

Differences: Continental European

  • “moral rights” of authors to the integrity of their person as expressed in their work
  • smaller potential protection
  • the protection of authors is more extensive
  • permits interoperability of programs
  • right of parody is narrower
  • moral rights consist of the right to create and to publish a work in any form desired, the creator’s right to claim the authorship of his work, the right to prevent any deformation, mutilation or other modification thereof, the right to withdraw and destroy the work, the prohibition against excessive criticism, and the prohibition against all other injuries to the creator’s personality

The most significant divergence between the two schools of intellectual property lies in their rationales. The Anglo-American system protects economic efficiency, while the Continental European school protects the personal rights of creators, as distinguished from their economic rights, generally known in France as “droit moral” or “moral rights”.

I personally prefer the US school of IP because of its utilitarian approach: “copyright is granted because it encourages authors and inventors by rewarding them for their acts of creation”.

Most of all I support these ideas:

  • Fair use exists to remedy market failure
  • New technologies make mass copying inexpensive and represent a potential market failure
  • Fair use consists of a balancing of economic interests

However, this system is not perfect, because it has some points of conflict with the political right to freedom of speech: “The first amendment to the US constitution guarantees the freedom of expression. However examples abound wherein US copyright law within the US has limited radical satirical critiques of American society. From a critical perspective, copyright is thus one more agent of maintaining state dominance – but through “private” entities such as Walt Disney Co. In such cases it is clear that property rights take precedence over free speech”.

But I still think that economic values can be identified much more easily than moral values. This makes the law easier to interpret.

Posted by: Tatjana | December 31, 2011

Topic 14 – The History and Development of Copyright

There was no task for this topic, so I’m just giving a link to my animated video which is somehow related to copyrights and censorship.

Posted by: Tatjana | December 31, 2011

Topic 13 – The Author vs the Information Society

To Do

Read Chapter 3, “Against Intellectual Property”, of Brian Martin’s book. Write a blog review (especially, comment on his strategies for change).


Some of Brian Martin’s ideas are reasonable and realistic, while some are too abstract. As a professor of social sciences, Martin thinks theoretically and philosophically. I am not sure how well his strategies may work in practice.

In the beginning, Martin talks about the original rationale for copyrights and patents. In his words, the very first idea was “to foster artistic and practical creative work by giving a short-term monopoly over certain uses of the work”. However, Martin thinks that this rationale has been corrupted: the whole system of intellectual property is “one more way for rich countries to extract wealth from poor countries”. Well, I agree with that. There are many cases illustrating that the Third World falls behind developed countries, but copyright is not the only reason; Martin himself notes that it is just “one more way”.

And here a philosophical question arises: maybe poor countries should remain this way? Not everyone can be rich; some have to deal with agriculture, others with science. Maybe this sounds harsh, but the law of nature says so. Tigers eat antelopes, and that is a pity. But if one day antelopes do some “magic” and become stronger, all the tigers will die.

I think it is fair that when someone creates an intellectual work, he or she deserves the reward. Martin has some doubts about defining “deserve”. I do not. Martin talks about “luck”, but what is the word “luck” doing in a scientific text? There is no such term when we do academic research or write a proposal; defining these issues, we may speak about probability or theoretical frequency. No one can say what luck is, so it cannot be treated as an argument against intellectual property.

If we assume that luck and talent are things we cannot measure, we have to accept that all intellectual products should be treated the same way, no matter how talented their creators were. For example, if Lady Gaga writes a song and some guy from the neighborhood strums a guitar, do they produce intellectual products of the same value? No, they do not. One receives as much as one deserves. And if someone’s composition is noticed and highly valued, he or she wants to protect it, because this is the code for success. If the neighborhood guy takes this “code” freely, he will skip the whole path Lady Gaga had to travel. Some people may argue that it was not a hard path. Well, who cares? I think talent is measured by the result, not by the process.

I do not intend to offend anybody, but I can hardly imagine someone standing against intellectual property if he really has ideas of his own that need protection. If we start sharing ideas, why not share diamonds or maybe even oil? Why does Estonia not share its forests with Africa? Philosophically speaking, do we deserve this forest?

As for the strategies, I will comment on the most notable ones.

Change thinking

It is always hard to change one’s way of thinking, especially when stable concepts already exist in our minds. Associations between physical and intellectual property are very strong, because they both bring profit. This is thanks to the capitalist world, where things are sold and bought and people try to benefit from everything. One person sells cookies; another creates a logo for those cookies; a third writes a jingle for a commercial for the same cookies. Thus we have three different business values. According to Martin’s logic, there is only one physical property: the cookies. However, they would not bring so much money without advertising; the logo and the commercial jingle are important motivators for customers. How should the income be divided? I think the designer and the musician also deserve money. And if their product succeeds (as in the case of Coca-Cola), it should be protected by copyright. Of course, in this case the copyright belongs to the company owner, but he pays his team, and everything made during working hours belongs to the company. This is a way of protecting success in general, not the logo or the jingle in particular. And this is the fair way, I admit. Why should we change this way of thinking?

Expose the costs

This passage in Martin’s original text may explain why the author complains about some countries exceeding others: “A middle-ranking country from the First World, such as Australia, pays far more for intellectual property — mostly to the US — than it receives”. The author feels that Australia is not treated fairly. Why is he so sure that the problem lies in intellectual property? Maybe Australia pays too much, even more than needed; maybe it is an economic problem. The author does not give any numbers to prove his theory.

Moreover, the system of intellectual property provides jobs for lawyers, financial managers, secretaries and professors.

Reproduce protected works

The author says that the term “piracy” is seldom used when, “for example, a boss takes credit for a subordinate’s work”. Well, it is quite normal that a boss acts like that, because he pays his employees’ salaries. Didn’t they sign a contract? Anyway, this is also his way of protecting the success of his company. Employees may leave their jobs, but their intellectual products stay within the company, because they were created during working hours. In other words, the boss gives the employees work and a salary, and the employees give the boss the results. Revealing secrets or sharing the property further could prove damaging to the company.

I do not understand why Martin compares intellectual and physical property right after saying that we should not treat them as similar. Here he writes: “…illegal copying is not a very good strategy against intellectual property, any more than stealing goods is a way to challenge ownership of physical property”.

All in all, intellectual property is a part of the capitalist world. Changing it entails changing the whole economic system.
