A Second Helping of Second Life

So, as mentioned in class, let's talk Ideal vs. Reality:

The Ideal

As I've heard it described and have read, the assets that SL brings to pedagogy seem to center mostly on accessibility. Ideally, SL lets a student from half a world away have the same experience as a student sitting in a dorm room on campus, provided the class is taught within Second Life. Infrastructure exists to support reasonably effective learning environments: projectors with slide shows, lecture venues, even large-scale object manipulation for the purpose of displays.

Additionally, Second Life allows, at least theoretically, for robust object creation, limited only by how far the creators are willing to go. A history course could, for example, use dozens of small models on a miniature ocean, timed to move independently in set patterns to explain fleet movements in the Pacific in World War II. Or you could visit a recreation of the Apollo 11 landing site (someplace I’ve been).

The ultimate goal of pedagogy in SL seems to be engagement through technological sophistication. The underlying ideal, I believe, is that SL is a precursor to future digital environments, so get on board now. Even if it's only half-realized today, being here now will mean better things later.

The Reality

Tumbleweeds. A lot of tumbleweeds.

As we saw in class, and I hope some of you might yet get to experience for yourselves, there’s a lot left to be desired in a professional and pedagogical sense in SL.

Most desired? People. I had thought I would go through several other virtual campuses in SL, showing everyone what they look like, but the truth is, you saw all you needed to see at Wolverine Island. Every one of the schools I visited was a ghost town, with myself the only inhabitant. It was an almost post-apocalyptic feeling, as if I had woken up to find the world intact but all its inhabitants gone. The nice seating areas on Wolverine Island, set up for classes that don't seem to be held, made me feel very lonely – the exact opposite of what should be the goal of as socially driven an environment as SL is meant to be. The same was true of a handful of other sites I visited – museums, public art spaces, colleges, municipalities, memorials – all of them empty.

The fact of the matter is that, once you strip away the mundane technical issues, people are the only real component missing in-world – at least outside the strictly social realm. Without fellow digital denizens, your presence there is just a waste of time.

I'd be interested to read in other blog posts what else strikes people about Second Life, particularly in a professional/pedagogical sense. I came out pretty strongly in class against the very technology I'd facilitated. I'd love to see someone else take up the gauntlet and show me I'm being too hasty.

Admittedly, the night clearly wasn’t a total wash. When else can you say “I saw Daffy Duck bumping and grinding Bugs Bunny in class tonight” other than where Second Life is concerned?

For Good Measure: 01001001 01101110 01110000 01110101 01110100 00101111 01001111 01110101 01110100 01110000 01110101 01110100

The video below shows a promising (but not as “OMG brain-controlled stuff WOOT!” as the referring site would have you believe) early step in bridging the gap between the mind and technology.

While the researchers draw the conclusion that this is revolutionary direct brain-to-brain communication, I'm not quite on board. First, all this demo really shows is a kludged-together system that let someone transmit a rudimentary thought to another human through a mediator (not directly) – in this case, the Internet. Here's the gist of the research:

Researchers attach EEG sensors to subject 1; the system reads subject 1's imagining of raising either the left or right arm as binary code – a 1 or a 0.

Meanwhile, subject 2, who sits in a different location (covered by Mr. Shakey Cam) and is hooked up to EEG amplifiers, watches an LED that flashes in subtly different ways for a 1 and a 0.

Somehow, the data encoded in the flashing is registered by subject 2's brain, and subject 2 correctly reports the series subject 1 sent: 1-0-1-1 is transmitted, received, and decoded.
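
For anyone who likes to see the moving parts, here's a minimal sketch of that encode-transmit-decode loop in Python. To be clear, imagine_arm(), flash_led(), and read_eeg() are hypothetical stand-ins I've made up for the EEG rig, the Internet link, and the LED; this is a toy model of the idea, not the researchers' actual software.

    # A toy model of the pipeline: subject 1's imagined movements become bits,
    # the bits travel over a mediator (the Internet), and subject 2's visually
    # evoked EEG response turns the LED flashes back into bits.
    # All function names here are hypothetical stand-ins, not a real API.

    def imagine_arm(bit):
        # Subject 1 imagines raising the right arm for 1, the left arm for 0;
        # assume the EEG classifier decodes that intention correctly.
        return bit

    def flash_led(bit):
        # The receiving rig flashes an LED in one of two subtly different ways.
        return "pattern_A" if bit else "pattern_B"

    def read_eeg(flash_pattern):
        # Subject 2's EEG response to the flash is decoded back into a bit.
        return 1 if flash_pattern == "pattern_A" else 0

    message = [1, 0, 1, 1]                                  # the series from the video
    sent_over_internet = [imagine_arm(b) for b in message]  # brain -> bits -> Internet
    received = [read_eeg(flash_led(b)) for b in sent_over_internet]
    print(received)                                         # [1, 0, 1, 1]

The toy version makes the key point obvious: nothing passes brain-to-brain. Everything is serialized into bits, shipped through a mediator, and reconstructed on the other side.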

What I see in this is actually a bit more promising than brain-to-brain communication (which this isn't), but it goes overlooked. This is brain-to-computer communication! Granted, this is a data transfer rate that makes a 1200 baud modem seem like light speed, but it's a start. If one takes our impending transformation into a race of silicon-augments as a good thing, this is a HUUUGE step. Plenty of research already shows that our brains can be read from and written to, in broad strokes, by external devices. But to show that we can encode a specific thought process from which a computer can correctly extract instruction and meaning? That's the encouraging thing to me.

I know I probably seem like I await the Singularity with at least unstable glee, and probably to some observers with unhealthy obsession. The reality (or at least my impression of reality) is that the Singularity isn’t a scary thing for me. I don’t think we’re looking at our impending doom, but instead at the next great step in civilization. Once thought merges so seamlessly with technology, it’ll be the next best thing to being everywhere at once. I’d have to think that such potential would be a transformative power in humanity.

But at a generous 3 seconds per 1 or 0, as seen in the video, it’s going to be a long time until I can transmit 01001101 01110010 00101110 00100000 01010111 01100001 01110100 01110011 01101111 01101110 00101100 00100000 01100011 01101111 01101101 01100101 00100000 01101000 01100101 01110010 01100101 00101110 (that would take just under 10 minutes) without breaking a sweat. (click here if you’d like to decode it.)
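
(If you'd rather not click through, a few lines of Python will do the decoding, and the math, for you; this just reads the 8-bit ASCII above and applies the 3-seconds-per-bit rate from the video.)

    # Decode the 8-bit ASCII message above and check the transmission-time math,
    # assuming a generous 3 seconds per transmitted bit.
    binary = (
        "01001101 01110010 00101110 00100000 01010111 01100001 01110100 "
        "01110011 01101111 01101110 00101100 00100000 01100011 01101111 "
        "01101101 01100101 00100000 01101000 01100101 01110010 01100101 00101110"
    )

    chunks = binary.split()
    message = "".join(chr(int(chunk, 2)) for chunk in chunks)  # binary -> text
    seconds = len(chunks) * 8 * 3                              # 8 bits per character
    print(message)
    print(f"{seconds} seconds, or about {seconds / 60:.1f} minutes")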

Just think how long it would take to convince a murderous robot to spare your life.

Who makes Steve Guttenberg a star?

After reading other conversations about what LinkedIn means to classmates, I have another shot to take at the topic: social media as modern-day secret handshake.

(Apologies for this being the best Simpsons Stonecutters video I can find)

In the classic episode of The Simpsons, Homer is unhappy to learn that he's apparently the only man in Springfield who isn't part of the Stonecutters, a secret order responsible for everything that goes on in the world. Homer, at first on the outside, struggles to get a local plumber to fix a flooded basement. He is only able to secure timely, competent help after revealing that he is a newly minted Stonecutter, thereby entitling him to the benefits usually reserved for the elite. Is this what social media, especially LinkedIn, will do to the professional landscape of the next few years?

Taking a look at this from two points of view, I'll start with the apologist's, which, in full disclosure, probably most closely resembles my own. I see these new tools of social media/Web 2.0 as just that: tools. If you believe that genies do not go back into bottles, what's done is done and we will do well to go with the flow. I have access to the same technology that anyone else does. As the apologist, I use it without further thought to my colleagues' compatibilities or any concern for voluntary fair play. I understand that there are those who will shy away from this frontier of professional communication, but that's not really my problem, is it? Ten years ago, access to computers and the Internet was something of a small club; computers were expensive, their usability didn't encourage adoption, and the reward returned for the investment of time honestly didn't amount to much more than a few computer geek friends talking ad nauseam about last night's hackfest on Diablo. Today, that's not the case. A perfectly capable nettop computer can be had for under $400, less than some of the technologically averse would drop on a month's car payment. Operating systems are the most accessible they've ever been, and the robust offering of simple yet effective applications means no one can reasonably be left out in the cold. So throw off your inhibitions about Twitter, Facebook, or LinkedIn and join the rest of the online world. The connections you can make and maintain will do more than just kill time; they might help put you on an inside track at work. If you're willing to put in quality e-suckup time, then why shouldn't you benefit from your technological aptitude?

On the other side of the coin, we have the inclusionists: not everyone is capable of, or willing to engage in, the same level of immersion, and since it doesn't really reflect on professional aptitude one way or another, isn't it just another kind of Boys-Only Club? If I'm a (stereotype alert!) twenty-year veteran of my office who just doesn't give a damn about all that online stuff, isn't it a kind of discrimination to penalize me for not participating? That is what you're doing, in effect, if I can't connect on Facebook and the like with the others in the office who care about that kind of thing.

I can see validity in both points. As admitted before, my allegiance would probably be to the apologists, judging by how easily I was able to place myself in that position. But it's always easier for the previously initiated (Digital Native/Naturalized Citizen?) to call the newcomers out for not being willing to adapt. It's also reasonable for the newcomers to expect equal reward for equal work. But is it really like that? Has it ever not been this way? Precedent doesn't equate to acceptability, but it does make change more difficult. What would have to (or should have to) happen to level this playing field? Or is it a false assumption that leveling is desirable?

Oh, and because a shameless Simpsons quote is applicable here if no other time: Carl: “Oh and don’t bother calling 9-1-1 anymore. Here’s the real number.” (Homer is handed a slip of paper reading ‘9-1-2’)

For Good Measure: When the Singularity comes, plz to has mai life?

Just as a quick note for those of you who don’t know what the Singularity is – it’s the theoretical point when artificial intelligence will achieve parity in complexity and capability with human intelligence. It’s perpetually just over the hill, but it is pretty much an inevitability.

A lot of what you hear in the next breath after the term Singularity is Matrix- or Terminator-style lamentation that humans cannot coexist with a self-aware race of computers. I don't agree. One of the great differences between us and any sort of AI on the foreseeable horizon is our predilection for emotion-based overreaction. I don't mean to be glib, but as long as our existence doesn't logically contradict existence for such an AI, we don't have much to worry about. Additionally, the popular cautionary tales we see in literature and film ensure that we're designing AI with all manner of kill switches in mind.

So the more likely reality is that sometime soon(ish), we're going to live in a world where intelligent computers simply exist. And while they await the completion of their ablative-armored, six-foot-tall, red-eyed, Austrian-accented physical bodies, our first interactions with them will probably be more like what we saw in class on Tuesday in Apple's World of Tomorrow-style future-tech flick. They may even be cloud-residing digital dwellers who are simply our contemporaries from a universe we can't comprehend. And we'll be visitors in their native landscape. What kind of reaction will they have to us in this environment?

What implications might this have for our digital universe? Is that where the battle, if there is one, might be waged? We exist in two places at once right now: the digital and the analog. Even if our newly aware digital colleague has a physical body of sorts, it's not designed to interact with its physical world. And aside from the occasional in-person input they may receive, they will have only our digital presences from which to parse out human behavior. They'll have our postings on discussion forums to understand our attitudes toward interpersonal relations. They'll have "Shit My Dad Says" to understand how we see the world. They'll have millions of unflattering pictures of felines engaged in grammatically challenged quests for self-discovery and cheezburgerz to understand our humor. That will be their only context for understanding us.

Well, on second thought, we’re all screwed.

Information on the Singularity:

Singularity Summit 2009

Wikipedia

For Good Measure: Integrating fully

As I alluded to in a previous post, I posit that it might not be long before our relationship with technology reaches a threshold. Whenever I spout off about this topic, I tend to get funny looks, but I encourage everyone to bear with me.

Think about our relationship with technology in just the past fifty years. If I may be poetic, we've progressed from computers that filled buildings and could only run the most basic operations to carrying around smartphones with processors many orders of magnitude more capable than the Apollo 11 Command Module's on-board computer. We've witnessed the rise of the ARPANET, the Internet, Web 2.0 (a term I still kinda dislike), and real-time photo-realistic graphics rendering. More information can stream across global distances faster each day. And per Moore's Law, the underlying computing capacity doubles roughly every 18 months to two years.
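
As a back-of-the-envelope illustration (taking the 18-month doubling period at face value, the optimistic end of that range), the compounding looks like this:

    # Rough Moore's Law projection: capacity relative to today,
    # assuming one doubling every 18 months (1.5 years).
    def relative_capacity(years, doubling_period=1.5):
        return 2 ** (years / doubling_period)

    for years in (3, 9, 15):
        print(f"{years} years out: ~{relative_capacity(years):,.0f}x today's capacity")

Even at the slower two-year pace, fifteen years still works out to roughly a 180x jump.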

As far as our behavior goes, we are the most digital-socially integrated group of humans ever, and that trend shows no sign of abating. We constantly devise new ways to use technology to keep in contact with our circles of friends, monitor our world, and create new digital worlds to escape into. So I ask – absolutely, 100%, I kid you not – seriously: how long until we become a race of cyborgs?

Just to be absolutely clear: I do NOT mean this.



It's not a big leap. In fact, with the above trends factored in, what used to be a yawning chasm is rapidly shrinking into a mere crack in the sidewalk. If we keep carrying more powerful and capable technology around with us, increasing our reliance on digital maintenance of our knowledge base, and pushing nanotechnology as far as it will go, it can't be long. Our generation might be cautious but convincible, and older generations probably won't adopt at all, but what about those yet to come? There will come a point when I won't be content physically interacting with a device when I know a more accurate, more efficient route exists directly through my brain. It started with increasingly portable yet capable laptops and phones. The next step will be wearable but separate tech that makes these functions more permanently accessible. After that will come technology that passively observes our nervous system to intuit our will into physical manifestations. (Actually, this device might already do that.) After we've contented ourselves with using trained subconscious eye movements to access and display information on a HUD before our eyes, what else could the next step be?

Integration isn't going to happen overnight. In fact, a lot of these steps have already been taken, in crude ways, to help those with injuries or disabilities manage some life functions through alternative means. As we continue to refine these technologies and guide them into more effective use, they will pass the mark of parity with the biological and move on to technological superiority. At that point, elective adoption will increase, and humans will gradually alter themselves for the simple, objective benefits these technologies afford.

I won’t commit the classic folly and assign a specific timeframe for when I think this will come to pass, but I will offer that I sincerely believe it will be well within my lifetime. When we reach this threshold, what reason will we have as a culture to ignore it and say we will go no further?

Personality Faceting

In response to Tuesday's class discussion and Dr. Schirmer's article, "The Personal as Public: Identity Construction/Fragmentation Online," I'd like to propose another way of looking at how we present ourselves online: facets.

The analogy works pretty well for all aspects of our interpersonal lives, actually. Let's use a precious stone like an emerald for our example. The stone starts off kind of unremarkable, really. Gemstones in their natural states aren't the clear, sparkly bursts of light we associate with the end product. Assuming you have a stone of appropriate clarity and worth, it still needs to be cut and polished. It takes a skilled lapidary to study the stone and see the gem within, and a slow, careful process to whittle away just the right amount in just the right places. The lapidary makes choices based on the application (pendant, ring, earrings) about how each facet should be oriented to make the best use of light. While the facet everyone focuses on – the one displayed most prominently to the world – gets all the attention, the supporting facets throughout the rest of the stone are crucial. Channeling light to the clearest, most pristine part of the stone and away from its imperfections is a task made possible by all the supporting facets working as a whole, not by the most prominent facet alone. If the faceting is done incorrectly, the gem ends up little more than the dull, unremarkable stone it started as.
Now think of the selves we present to the world. I firmly believe that every presentation of ourselves, both digital and physical, is part of a whole. We start life as an uninteresting, wiggly lump and soon begin the faceting process. The faceting starts out externally driven until we're old enough to chart those paths for ourselves, but the process leads to the same end. We choose the single best side of ourselves to display most prominently, but rely on the lesser-noticed aspects – sometimes our imperfections – to give support and context to our public face. This extends online just the same. It's not that we're different people in different online settings; it's that we use different facets of our personality in different settings, and it's always in service of the whole personality.

Digital Natives and our inevitable assimilation

Regarding the Prensky article, “Digital Natives, Digital Immigrants,” I’m going to explore some things I didn’t get to on other classmates’ blogs.

That we have two types of digital denizens is of crucial importance to our digital culture. I think the balance is going to shift steadily over the coming decades as the last of the immigrants dissipate through … let's just call it attrition. With each year, the number of natives will rapidly increase while the number of immigrants steadily decreases. (Aside: in what other scenario could the immigrants ever get outnumbered?)
I think this will have a steadily freeing effect on our digital communications. This isn't to say that the immigrants have a chilling effect by intent, but rather that, out of necessity, the digital landscape can't take certain things for granted while they're here. As our culture becomes vastly dominated by natives (those who've never not had a digital life), the inherent mistrust of all things digital will abate. That you have a Facebook account (or whatever our future equivalents will be) or maintain a significant online presence will be considered a default assumption. Accommodation for the disinclined will wane at first and eventually become rare.
In short order, we'll be so accustomed to living with a completely integrated digital self that it will be practically like existing in multiple dimensions at once, and we won't even notice.
And this will be one of several gateways to full cybernetic augmentation, which will set off yet another dichotomy: Integrated Natives and Integrated Immigrants. But I digress…
