Thursday, November 8, 2007

Bibliography for the workshop - Interview with O'Reilly

People Inside & Web 2.0: An Interview with Tim O’Reilly

in: http://www.openbusiness.cc/2006/04/25/people-inside-web-20-an-interview-with-tim-o-reilly/

Tuesday, April 25th, 2006

OpenBusiness spoke with Tim O’Reilly about the evolution of the Web and its most current trends, which are commonly labeled as Web 2.0. In September 2005, Tim wrote a seminal piece that presented many of the aspects of Web 2.0 and that now anchors much of the buzz around a new generation of internet applications. In the interview, he re-emphasizes the most important points of this development, talks about the evolutionary relationship between open & free and shares his vision of bionic systems that combine human and computational intelligence.

OB: At OpenBusiness, we’re especially interested in the rise of open content and open services and how they deal with the concept of “free”. How do you define that relationship? When are open and free the same and in what ways are they different?

For the last couple of years, I’ve been preaching an idea that Clayton Christensen first wrote about and called the “Law of Conservation of Attractive Profits.” We talked about it in response to my talk, the Open Source Paradigm Shift, in which I focused a lot on lessons from the IBM PC.

What I saw was that IBM – through genius or accident or both – introduced this new, open architecture for a personal computer: anyone could build one and that was open hardware. It was not Open Source as we know it today but it was pretty close. IBM said, “Everything has to be built with off-the-shelf parts from at least two suppliers, here is the specification, now go out, be fruitful and multiply.” The unintended consequence of that decision was that it took all the profits out of assembling computer systems, which had been the source of great profits in the past. IBM was a completely dominant company and now we have low-margin players like Dell. But we also ended up with high-margin players like Intel and Microsoft, neither of which IBM foresaw. They signed a deal with Microsoft to do the operating system, Intel got control of a key component and ended up with near-monopoly profits, all while IBM struggled for many years. They have come back now but they had destroyed the computer industry as they knew it, replaced it with a new one, and there was a period where – at least from the point of view of IBM – all the profits were disappearing from the system.

So when I started seeing comments by Ballmer saying Open Source is an intellectual property destroyer and it’s taking all the profits out of the system, I thought this is just what had happened before. We’re seeing the commoditization of software, where the value is going out of many classes of software that people used to pay for. But value is being rediscovered as it moves up the stack and down the stack. That led me to a couple of the new ideas that we now call Web 2.0: the Internet as a platform, information businesses using software as a service, harnessing collective intelligence – that’s moving up the stack. Down the stack is what I call “Data as the Intel Inside.” This stack model is repeating itself as this economic model is repeating itself, and so I think that each time you see something becoming free, something else is becoming expensive, which goes back to the Law of Conservation of Attractive Profits.

Software became free, content even became largely free, but now Google and Yahoo are collecting enormous sums of money by directing attention to their free content using a platform that’s built on top of their free software. Similarly, we looked at Napster and thought that all of music would be free, and now Apple has a billion-dollar business selling songs. We’re also just at the early stages where Skype and Asterisk are making telephone calls free – relatively speaking – and I believe that there will be new sources of revenue that will be overlaid on top of that market.

I also think that it’s really easy early in a market with disruptive innovation to see everything becoming cheap or free or commoditized and not to see the areas where there are new sources of control and new sources of revenue.

OB: Especially in the context of Web 2.0 business models, there has been a lot of emphasis on the ad-based model, which now supports everything from Wi-Fi to your mail account. What other layers do you see on top of that and are there alternate models that emerge?

Oh, absolutely – it actually goes back to this idea of “Data as the Intel Inside”. Look at all these mapping applications, for example: Navtech and TeleAtlas license data to Google, Yahoo and MSN, and while those companies monetize it through advertising, the data suppliers monetize it through licensing. The economic ecosystem is often much more complex than what people realize, because I don’t think that it’s just an ad-supported market.

Ads are one way of collecting money but they’re far from the only way and if you look at the complexity of the web ecosystem, there are all kinds of people who are participating. All of those free bloggers are actually paying their blogging service provider or their ISP for hosting, as an example of the different models that start to work together and build any complex ecosystem.

OB: As you mentioned before, much of Web 2.0 is about user-generated content and harnessing collective intelligence. What were some of the catalysts that drove the web in this direction recently and what has sparked these recent shifts?

I wouldn’t say that anything really sparked it. Instead, we talk of network effects, by which networks grow as a result of the value of the connections they make. The internet always had this characteristic that its value was driven by the number of nodes and all the emergence of user-generated content and harnessing collective intelligence is just an expression of that fundamental dynamic.
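O'Reilly's point that the internet's value is driven by the number of nodes is often formalized as Metcalfe's law, in which the number of possible pairwise connections grows quadratically. A minimal sketch of that arithmetic (the quadratic formula is a common textbook approximation, not something O'Reilly states in the interview):

```python
def potential_connections(n):
    """Number of distinct pairwise links among n nodes (Metcalfe-style)."""
    return n * (n - 1) // 2

# Each node added makes every existing node slightly more valuable:
for n in (2, 10, 100):
    print(n, potential_connections(n))
# Doubling the nodes roughly quadruples the possible connections.
```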

What really happened was that the original Web had all of these characteristics: it was from the edges, it was bottom-up, it was long-tail. But then we had this detour where traditional content companies and people who were imitating traditional content companies decided that it was all about publishing, “content is king”, and that this would get all the eyeballs that would be monetized by advertising – that was the dot-com boom and bust. But when the dust cleared, you saw that some companies had managed to survive. Pets.com was gone but here was Yahoo, here was Google, here was eBay, here was Amazon. All these companies survived, and we asked ourselves back when we first coined the term Web 2.0, “What distinguishes them?” In one way or another, they had rediscovered the logic of what makes Internet applications work – they had understood network effects.

Overall, there are certainly defining moments. For Google, it was Overture coming up with the advertising model, which put together Google’s user demand engine with a financial model. There was also the insight that you don’t just study the contents of documents but what people do with them as evidenced by the links they make.
If you look at eBay, it’s pretty clear that they had leveraged network effects in a fairly fundamental way too. Pierre [Omidyar] has this idealistic vision of a system he’s building in which buyers and sellers learn to trust each other.

Amazon is also a great example I keep bringing up because their system didn’t have a built-in architecture of participation; but they still worked it! On every page, they invite their users to participate, to annotate their data and to add value. They effectively overlaid an architecture of participation on a system that doesn’t intrinsically have one. In many ways, I think they’re the best company to study because they worked it, whereas the other companies mostly lucked into a sweet spot.

So as far as turning points go, the real one came when Tim Berners-Lee introduced the world-wide web and everything else has just been a voyage of discovery.

OB: Since those earliest days, the Web has been an open platform but over the years, especially more recently, there has been the emergence of companies like Google and Yahoo that have started to centralize more and more data, attention and now also user-generated content like photos and videos. Is there an increasing trend towards more centralization on the Web today?

Yes and No. On the one hand, the Web is extraordinarily good at decentralizing data: everyone has their own website with their own location and storage. Some sites have managed to become large aggregators for a certain class of data, such as the various photo sharing sites or music sharing sites for example.

But when you really think about centralization vs. decentralization, the biggest aspect of centralization actually comes via large-scale aggregators like Google – because it doesn’t matter whether you put your data on Google or on your own site: you’re still putting it on Google in the end as they’re indexing everything.

The real lesson is that the power may not actually be in the data itself but rather in the control of access to that data. Google doesn’t have any raw data that the Web itself doesn’t have, but they have added intelligence to that data which makes it easier to find things.

To me, one of the seminal applications that made me think seriously about the Internet as Platform was Napster in contrast to MP3.com. I had visited MP3.com not long before Napster appeared and they were proudly showing me their servers with “all this music” on them. But then the kid who grew up in the age of the Internet came out with Napster and asked, “Why do you need to have all this music in one place? My friends already have it and all we need is our set of pointers.” It’s that evolution from data to metadata – and to where people go to get access to it – that’s really interesting to me.

There are some cases where a certain type of data is hard to generate, as in Digital Globe launching a satellite to supplement the US satellite data or NavTech driving the streets for 500 million dollars plus to build a unique database – that’s one source of control. But the aggregators – the Yahoos, the Googles, the Amazons – are the other type of control, with data that they don’t actually own but which they control via the namespace or the search space or some higher-level metadata.

I think that we’ll find in some ways that this is the real secret of the relationship between free and non-free content. There will be so much free content that it’s going to be hard to find and those who can help you find what you want will be able to charge for it – in one way or the other, whether it’s through advertising or through subscription or something else. It’s about managing to find “the best”, and “the best” is a kind of metadata.

OB: What developments potentially worry you in this space?

First off, I think there will always be negative developments. All new technology goes from its wonderful use when all things seem possible and then, [Tim laughs] we get the blue screen of death – that’s a natural alternation. When bad things happen, they’re just a part of the evolution and of the ongoing cycle.

What worries me the most are governments getting involved and backing their existing companies. The patent system is a great example where the government is clueless and is disrupting the real activity of the market. We see it in the way that the Digital Millennium Copyright Act is trying to protect the interests of existing players while stifling the future. All of this is going to drive innovation to markets in countries that are more forward-looking because the internet is of course a global phenomenon and if you outlaw something, it will simply crop up somewhere else. So our challenge as an industry and as an economy is to discover the rules by which we can create value and ultimately create wealth in this new environment. It’s not about protecting the old ways of creating wealth but rather that creative destruction has to take place. Although companies may suffer from it, I think we’ll all be better for it.

OB: What upcoming developments excite you most and what do you see missing currently which you’d like to see grow?

I have been thinking a lot about “bionic software”, a concept that was introduced by You Mon Tsang with his start-up called Boxxet, by which people are becoming components in software. I’ve talked about this for a number of years and I believe that Amazon’s Mechanical Turk might have been indirectly inspired by a talk I gave there in May of 2003. I talked about the Turk and asked, “What are the differences between web applications and PC applications?” Web applications have people inside of them. You take the people out of Amazon and it stops working. It’s not a one-time software artifact, instead it’s an ongoing process where people have to do things every day for the software to keep working. So I referred to the Mechanical Turk, the chess-playing hoax which had a man inside, as a metaphor for the difference between internet applications and PC applications.

Amazon has given it a new twist and so have many other applications by harnessing the users to perform tasks that you couldn’t do with just the computer. And there is a really interesting thread there because for a long time, many people thought that we were going to arrive at some kind of artificial intelligence where we get the computers to be smart enough and match people. And what we’re doing instead is building a hybrid system, in which the computers make us smarter and we make them smarter – that’s bionic software.

When Google gives you 10 results and says, “One of these might be what you’re looking for”, it leaves us with the last mile. When a website uses a little CAPTCHA block, it’s asking that we do something that’s easy for humans but hard for computers when it comes to authentication.

The tag cloud also, which has spread from Flickr to all kinds of other websites, is a user-interface element that is basically built by the users of the system as the system is being used. So we are the software component that generates the tag cloud – we’re the input – and the tag cloud is a metaphor for this new kind of software.
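The idea that the users themselves are the component that generates the tag cloud can be sketched in a few lines: pool everyone's tags, count them, and scale each tag's display size by its popularity. A minimal illustration (the scaling scheme here is a simplified assumption, not Flickr's actual algorithm):

```python
from collections import Counter

def tag_cloud(tag_lists, levels=4):
    """Aggregate per-user tag lists into sorted (tag, size) pairs.

    Every user's tags feed one shared frequency count; the display
    size is scaled linearly to the most popular tag.
    """
    counts = Counter(tag for tags in tag_lists for tag in tags)
    if not counts:
        return []
    top = max(counts.values())
    # Map each tag's count onto a font-size level from 1 to `levels`.
    return sorted(
        (tag, 1 + round((n - 1) * (levels - 1) / max(top - 1, 1)))
        for tag, n in counts.items()
    )

# Three hypothetical users tagging photos on the same site:
users = [["sunset", "beach"], ["sunset", "travel"], ["sunset"]]
print(tag_cloud(users))
# "sunset" is tagged by everyone, so it renders largest.
```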

OB: And to close what’s been a fascinating interview, I’m curious what you saw in the last month or two that stood out to you and sparked your curiosity.

There’s a site that’s essentially a “Hot-Or-Not” for avatars in virtual worlds [http://RateMyAv.com/] where you can put up your character from Second Life or World of Warcraft and get it rated by users just like the Hot-Or-Not site [http://www.HotOrNot.com/]. That was really interesting to me because it showed that the real and virtual are interpenetrating further. We’re going to see many of the things that took place on the web increasingly recapitulate themselves in some of these virtual worlds. There’s a real opportunity because many economic models out on the web could obviously be reproduced. It’s a cool little signal of a future to come…

Bibliography for the Workshop: Terranova, T. (2004), Network Culture. Politics for the Information Age

See here a review of:

Terranova, T. (2004),
Network Culture. Politics for the Information Age, London: Pluto Press.


http://www.metamute.org/en/Network-Culture

............................

Network Culture


9 February, 2005 - 00:00

By Steve Wright

Steve Wright reviews Network Culture: Politics for the Information Age
by Tiziana Terranova

"Tiziana Terranova is a name familiar to readers of Mute. Issue 28 carried a lively and informative discussion between Terranova and Marc Bousquet, addressing the contemporary university as both node of accumulation and site of social conflict.1 Of her other writings to date, pride of place goes to an influential essay on the peculiarities of that labour which capital has sought to subsume to its digital economy.2

Now we have Network Culture, an important work that deserves to be read and discussed widely. The book is rich in its scope: in particular, in the fruitful confrontations and collisions it sets up between internet culture and contemporary movements against global capital. At the same time, it is not always an easy read, given the complexity of some of the issues addressed and arguments advanced, and the familiarity presumed with a wide range of debates. Fortunately Terranova writes well and takes her readers seriously, so that the insights provided repay persistence with some of the book’s more difficult passages.

Network Culture offers a series of distinctive and original arguments, while finding inspiration in a range of different critical perspectives. In a fundamental way, however, Network Culture is very much an engagement with many of the key themes dear to the post-operaista (post-workerist) theories that emerged from the wreckage of the Italian autonomist movement of the 1970s. These theories have become familiar to English-language readers, above all through the writings of Michael Hardt and Antonio Negri. Given the fascination with such ideas today in activist milieux, Network Culture is likely to find readers in circles well beyond the academy.

The first chapter explores a number of implications thrown up by Claude Shannon’s ‘classic’ conceptualisation of information in terms of the signal-noise relationship within a conduit linking sender and receiver. At this theory’s heart is a reading of the transmission of information as ‘the communication and exclusion of probable alternatives’ (p.20). What is so enjoyable about Terranova’s account here are the implications for political work that she draws out from her critical reading of this conduit metaphor of information. As Network Culture illustrates, the notion of communication that stems from this metaphor attempts to narrow the field to ‘alternatives formulated on the basis of known probabilities within the constraints set up by the interplay of code and channel or medium’ (p.25). If this is so, then what Terranova calls ‘a cultural politics of information’ must entail not merely a battle over the meaning of what currently informs us within late capitalism: It involves the opening up of the virtuality of the world by positing not simply different, but radically other codes and channels for expressing and giving expression to an undetermined potential for change. (p.26)

The second chapter of Network Culture explores a number of online practices such as packet switching, and asks whether these can help us resolve some of the more vexed problems within contemporary forms of political engagement. For example, does the internet’s open architecture – which in the face of difference, forgoes uniformity in favour of communication protocols – have something to tell us about challenges in terms of ‘extensibility’ currently facing movements against global capital and war? Here Terranova also reminds us how much online practices themselves have changed over the past decade since the takeoff of the World Wide Web, particularly in terms of community formation. The central chapter of the book is a slightly reworked version of Terranova’s essay on ‘Free Labour’. Beginning with a tilt at Richard Barbrook’s arguments concerning online anarcho-communism, the chapter grapples with the net-related unpaid labour performed outside the wage relation. Terranova is insistent that this labour, in all its pleasurableness for those concerned, is ‘a desire of labour immanent to late capitalism’ (p.94), and that claims about the anti-capitalist potentialities of movements such as open source must be offset by a healthy dose of scepticism. The fourth chapter follows on from this, discussing different aspects of that ‘soft control’ which attempts to turn labour’s potentialities towards capital’s continued reproduction. Network Culture then closes with a brief but enticing exploration of some of the key features that mark out ‘the virtual movements of this early twenty-first century’ (p.156), with the question of communication once again to the fore.

As should be obvious, Network Culture is part of a broader debate, and the book’s bibliography provides some helpful pathways into that wider discussion. Given the book’s central themes, it would be useful to examine its arguments alongside those of Ron Day, who has likewise engaged both with post-operaista theory, and information theory ‘classics’ such as Shannon, Weaver and Wiener.3 More provocatively, it would also be useful to read Network Culture alongside Doug Henwood’s latest offering on the ‘new economy’, and Ursula Huws’ thoughts on a growing ‘cybertariat’, both of which seek to meet capital’s claims about its new ‘weightless economy’ head on.4

As with any text worth reading, there is much to argue with in this book. Those not enamoured of the ‘immaterial labour’ thesis advanced by the post-operaisti will be perplexed by some of Network Culture’s arguments, not least the assertion that the work of writing/reading/managing and participating in mailing lists/websites/chat lines … falls outside the concept of ‘abstract labour’ (p.84).

In a similar vein, Terranova offers the following tantalising statement about another key post-operaista concept:

Unlike class, however, a multitude is not rooted in a solid class formation or a subjectifying function (although it is also a matter of class composition) (p.130).

She elaborates a little on this: the category multitude is of necessity ‘vague’ in that it seeks to denote something that while ‘not deny[ing] the existence of the stratification of identity and class’, nonetheless threatens to reach beyond them (p.130). Is this a case of wanting to have your cake and eat it too? How exactly might class composition analysis prove useful here? This question is not answered directly in Network Culture, even if a range of suggestive beginnings are provided in the second half of the book.

In the concluding paragraph of her original 2000 essay on ‘Free Labour’, Terranova argued as follows:

As the spectacular failure of the Italian autonomy reveals, the purpose of critical theory is not to elaborate strategies which then can be used to direct social change. On the contrary, as the tradition of cultural studies has less explicitly argued, it is about working on what already exists, on the lines established by a cultural and material activity which is already happening.5

Perhaps I also want to have my cake and eat it too, but why shouldn’t we aspire after both these goals? Certainly we don’t need strategy in the sense of some predefined pathway to salvation laid down from on high by specialists, whether these claim to be ‘theorists’ and/or ‘leaders’. But couldn’t strategy encompass a collective attempt to develop some sense of the directions in which we’d like to head, together or apart? Or at the bare minimum, some sense of what it is we seek in various ways to move away from? If so, then Network Culture can be seen as a stimulating contribution to the ongoing SWOT (strengths, weaknesses, opportunities, threats) analysis of contemporary power relations in and around online networks. We should all look forward to Terranova’s future offerings to the development of that ‘inventive and emotive political intelligence’ (p.157) which is so sorely needed today.

1. T. Terranova & M. Bousquet, ‘Recomposing the University’, Mute issue 28, Summer-Autumn, 2004
2. T. Terranova, ‘Free labor: producing culture for the digital economy’, Social Text 63 Summer, 2000 http://www.uoc.edu/in3/hermeneia/sala_de_lectura/t_terranova_free_labor.htm
3. R. Day, The Modern Invention of Information: discourse, history and power, Carbondale, IL: Southern Illinois University Press, 2001
4. D. Henwood, After The New Economy, New York: The New Press, 2003 ; U. Huws, ‘The Making of a Cybertariat: Virtual Work in a Real World’, New York: Monthly Review Press, 2003
5. T. Terranova, 2000, op. cit. "

Tiziana Terranova, Network Culture: Politics for the Information Age, London: Pluto Press, 2004 £14.99

Steve Wright
is a lecturer in the School of Information Management & Systems, Monash University, and the author of Storming Heaven: Class Composition and Struggle in Italian Autonomist Marxism, London: Pluto Press, 2002

.................

As a complement, see also:

Terranova, Tiziana. (2000). Free labor: producing culture for the digital economy. In Social Text, 63, Vol. 18, No. 2. http://www.btinternet.com/~t.terranova/freelab.html

Friday, September 21, 2007

Web 2.0: definition, characteristics and examples - by Ana Neves

Web 2.0: definition, characteristics and examples

by Ana Neves

July 2007


"Social tools are here to stay and will forever change the way people use, and expect to be able to use, the Internet. For that reason, it is worth trying to understand more about what these social tools and web 2.0 actually are.

Much has been said lately about web 2.0: a new "version" of the Internet that enables interaction closer to the kind we are used to in person.

The profusion of sites built on the social tools that make up this "new" virtual landscape has grown exponentially. They enable levels and patterns of interaction, sharing and exchange of opinion that until recently were only possible offline. Imagination is almost always the only limit, and many sites have married the "Internet" channel with social tools to offer features never before possible.

Some of those sites:

del.icio.us – a place to store and share favourite sites (a similar site in Portugal: Tags no Sapo)

Digg – a site made up of news items found by users and suggested by them as interesting or high-quality (a similar site in Brazil: rec6)

Flickr – a site for sharing and searching photographs taken by the users themselves (similar sites: Fotos no Sapo in Portugal and 8p in Brazil)

My Space – a community that lets you find people with similar interests and share ideas, photos and videos

Netvibes – build your own page with the content you like

Patient Opinion – a site where citizens can talk about their experience with British health institutions

Technorati – search for blog posts and tagged social media

Twitter – a site used by people all over the world to tell others, friends or not, what they are doing at any given moment

Wikipedia – an encyclopedia written collaboratively by its readers

You Tube – a site that lets users watch and share videos (a similar site in Portugal: Videos no Sapo)

Zoho – word-processing, spreadsheet and many other applications available online for collaborative work with other users (a similar site in Brazil: Aprex)

These sites are listed in alphabetical order and were chosen for their popularity, but with the aim of illustrating the variety of uses of social tools.

But what, after all, is this web 2.0? According to the Portuguese version of Wikipedia (see box), "Web 2.0 is a term coined in 2003 by the American company O'Reilly Media to designate a second generation of communities and services based on the Web platform, such as wikis, applications based on folksonomy, and social networks. Although the term suggests a new version of the Web, it does not refer to an update of its technical specifications, but to a change in the way it is perceived by users and developers".

Web 2.0 is thus essentially about creating environments conducive to the creation and maintenance of social networks (open or closed, public or private). This spirit extends beyond the walls of any one site, and links between different sites are increasingly being established in order to offer additional features to the members of the respective communities.

It is because of this goal of openness and transparency that web 2.0 is also characterised, to a large extent, by the free nature of (most of) its sites and tools and by the creation and publication of APIs (Application Programming Interfaces) that allow communication with other sites. These have lately resulted in the creation of multiple plugins, developed mostly by the community of users, which extend the basic functionality of a given site or application and/or aggregate content.
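The kind of site-to-site exchange these APIs enable usually amounts to fetching a structured payload and recombining it. As a rough illustration, here is a sketch that parses a hypothetical JSON response from a social-bookmarking service; the field names are invented for the example, not any real site's schema:

```python
import json

# A hypothetical JSON payload, shaped like what a bookmarking API
# might return to a plugin; "url" and "tags" are illustrative names.
payload = '''
[
  {"url": "http://example.org/a", "tags": ["web2.0", "api"]},
  {"url": "http://example.org/b", "tags": ["web2.0"]}
]
'''

bookmarks = json.loads(payload)

# A content-aggregating plugin might group every bookmarked URL by tag:
by_tag = {}
for b in bookmarks:
    for tag in b["tags"]:
        by_tag.setdefault(tag, []).append(b["url"])

print(by_tag["web2.0"])
```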

The elements / features generally present in web 2.0 sites are:
* blogs (in Portuguese, blogues) – diary-like sites in which texts are presented in reverse chronological order
* social bookmarking – a system of bookmarks (favourites) accessible from any computer with Internet access, which allows them to be commented on and shared with other people
* wikis – sites whose content is added and maintained by those who visit them
* tagging – the ability to associate one (or more) terms or keywords with an item of content (e.g. a text, photo, bookmark)
* RSS feeds – (RSS, Really Simple Syndication) a way of alerting a site's members / visitors to changes in its content. These feeds, produced automatically by many of the available tools, can then be read in online feed readers (e.g. Google Reader - www.google.co.uk/reader), on the desktop (e.g. RSS Bandit - rssbandit.org) or within an email client (e.g. Attensa - attensa.com).
* content aggregation – making content published on other sites available on one site, either to ease access to it (e.g. Netvibes) or to enrich it with the opinions of other users (e.g. Digg).
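The RSS mechanism described in the list above is just an XML document that readers poll for new items. A minimal sketch using Python's standard library, with an invented two-item feed standing in for a real site's output:

```python
import xml.etree.ElementTree as ET

# A minimal RSS 2.0 document; real feeds carry many more elements
# (description, pubDate, guid, etc.). Titles and links are invented.
feed = """<?xml version="1.0"?>
<rss version="2.0">
  <channel>
    <title>Example blog</title>
    <item><title>Web 2.0: definition</title><link>http://example.org/1</link></item>
    <item><title>Social tools</title><link>http://example.org/2</link></item>
  </channel>
</rss>"""

# A feed reader extracts each item's title and link, then compares
# against what it has already shown in order to flag new content.
root = ET.fromstring(feed)
items = [(i.findtext("title"), i.findtext("link")) for i in root.iter("item")]
print(items)
```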

Note: This text will be continued over the coming months with examples of how social tools can also be used in an organisational context and how they relate to knowledge management".


In:
http://www.kmol.online.pt/artigos/200707/nev07_1.html

Saturday, September 8, 2007

Hi5 is the social network most visited by the Portuguese

Social network statistics in Portugal


"During the first half of this year, 77.6 percent of Internet users aged over three, resident in mainland Portugal, accessed online social networks, which corresponds to 2.3 million users, according to data from Marktest.

In total, more than a billion pages were viewed in virtual communities between January and June, with each Internet user viewing, on average, 465 pages. The time spent on these sites, per user, was 3 hours and 14 minutes, for a total of 7.5 million hours over the semester.

Men and women show similar behaviour in this area. The difference lies in the time of day chosen to visit these virtual spaces: women lead the visits between 11 a.m. and 11 p.m., while men show a higher access rate from then on and through the early hours of the morning.

Hi5 is the site that attracts the most visitors, leading the table with close to 2.02 million unique users over the first half of the year. Portuguese Internet users accessed 673 million of its pages, spending 4.9 million hours on them.

Spaces.msn.com came second in unique users, with 993 thousand, and MySpace third, with 571 thousand.

When it comes to the number of pages visited, Pt.netlog.com stood out (124 million), while third place went to orkut.com, which recorded 110 million page views.

In turn, second place in time spent also belonged to Pt.netlog.com, with 737 thousand hours of viewing. Fotolog.com came third in this category with around 628 thousand hours of browsing".


2007-07-18 17:01:00


In:
http://tek.sapo.pt/4Q0/758352.html

Saturday, June 30, 2007

"The rise of social software" by Michele Tepper

"In this age of tech industry retrenchment and reorganization, and the busting of DotCom dreams, it's surprising to learn that one area of Web software development—now known as "social software"—is more vibrant and active than ever.

Social software refers to various, loosely connected types of applications that allow individuals to communicate with one another, and to track discussions across the Web as they happen.

Many forms of social software are already old news for experienced technology users; bulletin boards, instant messaging, online role-playing games, and even the collaborative editing tools built into most word processing software all qualify.

But there are a whole host of new tools for discussion and collaboration, many of them in some way tied to the rise of the Weblog (or "blog").

New content syndication and aggregation tools, collaborative virtual workspaces, and collaborative editing tools, among others, are becoming popular, and social software is maturing so quickly that keeping up with it could be a full-time job in itself.

What's more, social software, especially the popular Weblog (or "blog") publishing tools, is gaining notice by the larger players on the Web.

Google recently purchased Pyra, creator of the popular Weblog tool Blogger, and added "Blog This!" as an option on its Google Toolbar.

AOL has announced that it will launch its own Weblog tool for its more than thirty million subscribers this summer.

Soon blogs—perhaps the first native publishing format for the Web—may become one of the most important prisms through which we understand the online world, since they and their relatives in collaboration and group discussion tools may become our primary way of interacting with one another online".

in Michele Tepper "The rise of social software", in netWorker
Volume 7, Number 3 (2003), Pages 18-23. http://doi.acm.org/10.1145/940830.940831