Monitoring disasters in real time

There are natural disasters, such as severe storms that cause flooding, maritime storms or uncontrolled wildfires, in which observing their evolution is crucial both for sizing up the damage and for drawing up a response strategy on the ground. In this scenario, satellites are genuine lifesavers, which is why Spain and Portugal are going to launch an "Atlantic Constellation" of satellites that will observe the Iberian Peninsula from space in order to protect it.

The context. It is not hard to find catastrophes that have hit the peninsula in recent years: the train of storms with which 2026 began, whose effects could be seen from space, or the DANA that devastated Valencia. Currently, the reference satellites for forest management, fires and emergencies in Europe are ESA's Copernicus Sentinel fleet, which generates images of the Iberian Peninsula every two or three days.

What the Atlantic Constellation is. A set of 16 small satellites, eight launched by each country, which will orbit less than 700 kilometers above Earth and coordinate to generate images of the territory every two or three hours. It is a complement to, not a replacement for, the European Copernicus Sentinel satellites.

Why it matters. The Atlantic Constellation brings an obvious improvement for assessing the progress of a disaster and planning the response: going from fresh imagery every 2-3 days to every 2-3 hours, which is practically real time for this kind of event. Moreover, as Nicolás Martín, director of Users, Services and Applications at the Spanish Space Agency, explained to El Periódico, the project is "very relevant for the Spanish aerospace sector and for our strategic autonomy." And although its main mission is emergencies and natural disasters, it also has applications for other sectors and entities, such as agriculture.

How they are going to do it. Spain has awarded its part to the Catalan company Open Cosmos through a public tender. The company will design and manufacture the state's eight satellites, while ICE-CSIC will develop one of the four payloads of each satellite and the geophysical data extraction algorithms. On the Portuguese side, GeoSat will lead the project, with ESA supervising everything. Each satellite will carry four instruments: high-resolution multispectral optical cameras to analyze vegetation and terrain, GNSS reflectometry sensors to measure soil moisture and sea state, IoT connectivity, and a system to identify and track vessels.

The roadmap. The first demonstration satellite, called Pathfinder, is scheduled to be ready by the end of this year and launched in the first half of 2027, serving to validate the integrated technologies before the rest of the units are manufactured. The full deployment of the satellite fleet will take place over the following years.

In Xataka | Poland and Spain are the European countries that have increased their contribution to space the most, for very different reasons

Cover | Photo by SpaceX

Is it a good time to buy a Pixel 10 or will the price drop soon? This is what the data tells us

Given the price evolution of the Google Pixel 10, here is our assessment of whether now is a good time to buy it.

🟢 BUY WITHOUT LOOKING BACK

Google Pixel 10

Verdict: Excellent moment. It has only been on the market for six months, but its price has been dropping gradually and has now reached its all-time low.

Official RRP: €899 (Google Store)

Target "street" price: do not pay more than €649 (Amazon)

Next release: Google Pixel 11 (expected August 2026)

Our recommendation: now is a good time to buy it. On Amazon it is at a very good price (€649), and at PcComponentes you can get it even cheaper (€619).

Regret cost: low. Although the Pixel 10 will drop in price when the Pixel 11 comes out, it may not reach a price as competitive as the one PcComponentes is offering now. At most you could lose about €20, since the Google Pixel 10 is not expected to fall below €600.

Why the traffic light is green. Six months have passed since the Google Pixel 10 launched (August 2025), and exactly the same amount of time remains until Google presents the new generation. This is a good moment for the undecided who are hesitating between waiting for the Google Pixel 11 and buying the current model. For those who don't want to wait, the Google Pixel 10 is one of the phones that has received the most offers in recent months (as we have covered in Xataka Selección). Now, at €619, it sits at one of the best prices the current Google flagship has reached.

Expert buyer's advice: once a few months have passed, do not buy the Pixel in the official Google store, where the price stays at launch level. Go instead to other stores, which continually run offers on it.

Price history and change prediction. This graph compares the price evolution of the previous model, the Google Pixel 9, superimposed on the trend of the current Google Pixel 10. These are our observations:

The Google Pixel 10 has seen a more aggressive price evolution than the Pixel 9. The previous model went on sale for €900 and held that price until its fourth month. The Pixel 10, by contrast, has dropped 28% in just one semester, going from €900 to €649.

After a stable start, between the second and third month the Google Pixel 10 dropped €150 and has now stabilized at around €650. That figure matches the all-time low that took the Pixel 9 almost a year to reach.

The price the Pixel 10 has now reached is very competitive: it has fallen very quickly and is not expected to drop much further. It may reach €600, but probably not until the next-generation Pixel arrives on the market.

The best Google Pixel 10 deals now. For those looking for the Google Pixel 10 without waiting any longer, these are the best current options. Keep in mind that, after publication, offers may expire or stock may run out. The phone is currently at very competitive prices, well below the €899 of its official rate in the Google store. Prices may vary. We earn commission from these links.

When is the Google Pixel 11 released? Time flies, so it is worth knowing the details about the successor to the current Google Pixel 10:

Rumors about the Google Pixel 11: there are already leaks about Google's next phone. Its chip is expected to be manufactured by a foundry other than the current one: TSMC.

Expected release date: if Google's pattern holds (consolidated with the Pixel 9 and 10), everything points to an official presentation of the Google Pixel 11 in mid-August 2026, with availability in stores at the end of that same month.

When will the Pixel 10 become "obsolete"? Despite being one of the best-supported phones on the market thanks to its seven-year update cycle, the launch of the Pixel 11 will introduce the new Google Tensor G6, a more powerful processor that will push the Pixel 10's hardware into the background. Even so, if you buy the Pixel 10 now you will not be purchasing an "old" model: its performance will remain excellent even after the new version arrives.

Is the Google Pixel 10 for you right now? If you are considering buying the Google Pixel 10, we want to make the decision easier.

✅ BUY IT TODAY IF:

You need a high-end phone at a good price: you can currently get it with a discount of close to 30% off the official RRP in the Google store.

You find an offer at €649 or less: at that price, it is the ideal time to buy it.

You are coming from a previous Pixel and want the latest model: a perfect option if you feel yours has become outdated.

⛔ I DO NOT RECOMMEND IT IF:

You always want the very latest: barely six months remain until the new Google Pixel 11 launches; if you are looking for novelty, the wait will be worth it.

You can wait a few months: it is very likely that, before long, the price will approach the €600 barrier (although the current €619 at PcComponentes is already a very good opportunity).

You are going to pay the price it has in the official store (€899): something not … Read more
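As a quick sanity check on the price figures quoted in this guide (a sketch; only the prices mentioned in the article are used, the "expected floor" of €600 is the article's own prediction):

```python
launch_price = 900   # Pixel 10 launch price in euros (August 2025)
amazon_price = 649   # current Amazon offer
current_best = 619   # best current offer (PcComponentes)

# Price drop quoted in the article: from 900 to 649 euros in six months
drop_pct = (launch_price - amazon_price) / launch_price * 100
print(f"Drop vs launch: {drop_pct:.0f}%")  # ~28%

# "Regret cost": the article expects a floor of ~600 euros after the Pixel 11
expected_floor = 600
regret = current_best - expected_floor
print(f"Maximum regret if you buy now: {regret} euros")  # ~20 euros at most
```

In other words, waiting for the Pixel 11 launch would save at most around twenty euros relative to today's best offer, which is the basis for the "low regret cost" verdict above.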

With Plenitude, the kWh will cost you the same 24 hours a day and, at the same time, you get a gift card for Netflix

If you have an electricity rate with time slots, the clock is your greatest ally. You probably try to organize yourself as much as possible to run the washing machine or dishwasher during off-peak periods, saving money along the way. This creates stress in many homes, especially when unforeseen events arise or there are small children at home. What is the alternative? A rate where the kWh costs exactly the same 24 hours a day. That is just what Plenitude's Easy Rate offers, and it now comes with a gift in the form of a Netflix gift card. Of course, only if you sign up before March 2.

A rate that lets you run the washing machine (or whatever) without looking at the clock. Although it may not seem like it, there is a considerable difference in the price per kWh between the cheapest and the most expensive hours. If you can run the most power-hungry appliances during off-peak hours, there is no problem. But what if you get home at 7 p.m. every day? Then you have to pay the most expensive price, which can make your electricity bill skyrocket.

That does not happen with Plenitude's Easy Rate. With it, the price of electricity is exactly the same all day (€0.128306 per kWh at the time of writing). No matter how many surprises your day brings, you won't have to worry about how much electricity costs at a given hour. Furthermore, once you contract the rate, the price per kWh remains stable for 12 months: if, for example, an energy crisis drives up the price of electricity, you will keep paying the same. And it has no lock-in period, something not all electricity rates on the market offer. You can sign up in several ways, including a 100% digital process on the Plenitude website that only takes a few minutes.
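To put the flat rate in perspective, here is a rough cost sketch. Only the €0.128306/kWh figure comes from the article; the household consumption, peak share and time-of-use prices used for comparison are hypothetical illustration values:

```python
flat_rate = 0.128306   # euros per kWh, Plenitude Easy Rate (from the article)

# Hypothetical time-of-use prices for comparison (made-up values)
peak_rate, offpeak_rate = 0.20, 0.09

monthly_kwh = 250      # hypothetical monthly household consumption
peak_share = 0.6       # fraction of consumption that falls in peak hours

flat_cost = monthly_kwh * flat_rate
tou_cost = monthly_kwh * (peak_share * peak_rate + (1 - peak_share) * offpeak_rate)

print(f"Flat rate:   {flat_cost:.2f} euros/month")
print(f"Time-of-use: {tou_cost:.2f} euros/month")
```

Under these assumed numbers, a household that cannot shift most of its consumption to off-peak hours pays more on a time-slot rate than on the flat rate; if you can shift nearly everything off-peak, the comparison reverses.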
We are talking about the Easy Rate for electricity, although Plenitude also offers the same for gas, as well as for both supplies together. Now for the promotion mentioned above, active only until March 2: any of these rates includes a €50 Netflix gift card, received after the first month of the contract. It can be used both for a new account and for one you already have. If we do the math, it's a good deal: it covers almost four months of the Standard plan. All together, it is an opportunity to save every month, both on the electricity bill and by offsetting a subscription for a while. But only if you hurry and contract the Easy Rate before March 2.

Some of the links in this article are affiliate links and may provide a benefit to Xataka. Offers may vary if products become unavailable.

Images | Patrick Schneider on Unsplash, Plenitude

In Xataka | What do you need (according to the EU) for your survival kit and how much will it cost you?

In Xataka | Best power banks to charge your mobile phone. Which one to buy and recommended external batteries

There was a time when Japan was the king of TVs. All its giants have ended up surrendering to the evidence

Not so many years ago, talking about Japanese televisions meant talking about the kings of the market, not so much in volume as in quality. Sony's Trinitron sets were (and, for playing retro video games, still are) legendary, but there were also the technologies of Sharp and Toshiba, and Panasonic's plasma. However, first South Korea and now China have run over the Japanese brands, and Panasonic is the latest "victim". It may be for the best.

The Panasonic case. Bluntly: Panasonic, once on the podium of the great Japanese manufacturers, has just announced that the Chinese company Skyworth will from now on be in charge of producing and selling its televisions. At the presentation event for this year's catalog, representatives of the Japanese brand commented that the new partner "will lead sales, marketing and logistics while Panasonic provides expertise and quality assurance." Speaking to FlatpanelsHD, Panasonic said Skyworth will take care of everything, but the resulting product will still carry the "Panasonic" name.

A turn towards China. The company had been outsourcing the production of its mid-range and entry-level models for years, but now that loss of identity is complete. With this move, the firm hopes to once again become one of the largest brands in both Europe and the United States. The curious thing is that the announcement comes just a few weeks after Sony outsourced the production of its televisions to TCL. It is a symbolic turn: the Japan that once led the technological conversation has been gradually eclipsed by South Korea, Taiwan and, now, China. Both TCL and Skyworth are Chinese companies and, although TCL is much better known, Skyworth is not exactly small: headquartered in Shenzhen, it has intermittently featured among the main manufacturers of Android TV televisions.

It makes… sense.
According to statements to FlatpanelsHD, both companies will jointly develop the high-end OLED TVs, and the move has a very clear reading: it is a win-win for both companies, but, as in the Sony-TCL case, one wins much more than the other. Chinese companies have invested heavily in recent years in plants capable of producing enormous quantities of large panels. Televisions are cut from what is known as "mother glass": the larger the glass plate, the more large-screen televisions can be cut from it, and if more televisions can be produced at a time, they can be sold at a lower price. TCL has state-of-the-art factories focused on that large-panel production, which helps explain why it sells 65- and 75-inch models at ridiculous prices. With these partnerships, the Japanese hope the muscle of the Chinese manufacturers will help them achieve greater market penetration. But, of course, it is undeniable that the names 'Sony Bravia' and 'Panasonic' carry much more weight than any Chinese brand, and now it is TCL and Skyworth that get to exploit them in the market.

Tears in the rain. In the end, as the saying goes, these muds come from those rains. Panasonic, once one of the spearheads of television technology thanks to plasma, had not made much of a splash for years in a conversation dominated by LG, Samsung and, by leaps and bounds, the Chinese. Together with Sony, it was the last stronghold of a Japanese industry that had already watched giants like Sharp, Pioneer and Toshiba fall by the wayside, in some cases to be rescued by Chinese companies (Toshiba by Hisense) or Taiwanese ones (Sharp by Foxconn). As they say, 'mistakes were made': Panasonic clung for too many years to a plasma technology that was impressive but very expensive to produce, a huge ship that could not correct course when better LCD and OLED panels began to appear.
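The "mother glass" arithmetic described above can be sketched in a few lines. The substrate dimensions are approximate public figures for a Gen 10.5 fab, not from the article, and the cut count ignores cutting margins:

```python
import math

def panel_size_mm(diagonal_inches, ratio=(16, 9)):
    """Width and height in mm of a 16:9 panel with the given diagonal."""
    w, h = ratio
    diag_mm = diagonal_inches * 25.4
    scale = diag_mm / math.hypot(w, h)
    return w * scale, h * scale

def panels_per_substrate(diagonal_inches, glass_mm=(3370, 2940)):
    """Simple rectangular cut count on one sheet, trying both orientations."""
    pw, ph = panel_size_mm(diagonal_inches)
    gw, gh = glass_mm
    return int(max(
        (gw // pw) * (gh // ph),
        (gw // ph) * (gh // pw),
    ))

print(panels_per_substrate(65))  # 8 panels of 65" per Gen 10.5 sheet
print(panels_per_substrate(75))  # 6 panels of 75"
```

This is why newer, larger fabs translate directly into cheaper big-screen TVs: the same sheet of glass yields several more sellable panels, spreading the fixed cost of the substrate across more units.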
As we say, we will have to wait to see how this translates into market share, but in Japan it is a blow. With the Sony-TCL joint venture alone, it is estimated that 50% of the Japanese market will be controlled by Chinese capital. The last point of pride they could hold on to was Panasonic.

In Xataka |

Telefónica is already selling its mini data centers to compete in the era of real time

For years we have been told that the future of artificial intelligence lies in ever-larger data centers, more powerful and more demanding in energy consumption. And it is true that computing muscle matters. But there is an equally decisive factor that gets far less attention: distance. In the era of real time, what matters is not just how much you process, but where you do it. Every millisecond that data takes to travel can undermine the ability to react instantly. This apparently technical nuance is becoming a strategic issue for Spanish companies.

Telefónica's bet. The company has begun commercializing its edge computing services for B2B clients in five Spanish cities (Madrid, Valencia, Seville, Bilbao and A Coruña), as part of a broader deployment that includes 17 nodes in this initial phase. This means that companies and administrations can now contract processing and storage capacity close to the point where their data is generated.

Closer data. Edge computing means processing information where it is generated, rather than constantly sending it to distant data centers. As Microsoft explains, it is about moving computing and storage capacity to peripheral network locations such as factories, stores, offices or distributed infrastructure. In practice, local devices and servers analyze and filter data on site and send only what is relevant to central systems. The goal is to reduce latency, relieve network traffic and enable real-time responses, complementing rather than replacing the traditional cloud.

The deployment. Telefónica's Edge Plan aims to reach 17 nodes in this first phase over the course of the year. According to the company, 12 of those infrastructures are already deployed: besides the five cities with active B2B services, there are nodes in Madrid, Barcelona, Málaga, Palma de Mallorca, Valladolid, Terrassa and Mérida. The incorporation of Zaragoza, Las Palmas de Gran Canaria, Gijón, Santa Cruz de Tenerife and Santiago de Compostela is planned for later this year. Many of these facilities are old copper exchange buildings converted into Edge centers, adapted to availability and security requirements.

Basic and Smart. Telefónica does not sell "edge" in the abstract, but two concrete ways of using it. The first is Basic Edge, a stable layer that brings computing capacity closer to the territory and focuses on data control and compliance with national, regional or local regulatory frameworks. Each node acts as an availability zone, allowing applications to be deployed with additional guarantees of continuity and resilience. The second is Smart Edge, which introduces dynamism: selection of the most appropriate node at any given moment, creation of instances on demand, and operation over FTTH or 5G SA connectivity depending on the scenario.

Beyond physical infrastructure. Telefónica's portfolio includes GPU computing capacity for artificial intelligence workloads, available as a service and deployed in Edge nodes. This lets companies and institutions run high-performance models without buying their own hardware, while keeping processing within the defined regional environment. The company also mentions incorporating RAG agents and capabilities to adapt models to specific contexts. Overall, the strategy seeks to bring AI closer to the data under criteria of sovereignty and regulatory compliance.

When the millisecond rules. An example helps convey the scope of this architecture. Telefónica developed with CAF a pilot that combines Edge and 5G Stand Alone for the railway sector, providing computer vision solutions that process data close to the asset instead of depending on centralized infrastructure. According to the company, this approach avoids installing processing nodes in each car and keeps responsiveness at levels compatible with real-time operations.
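The "analyze locally, send only what is relevant" pattern described above can be sketched in a few lines. This is a generic illustration of edge-side filtering, not Telefónica's actual stack; the sensor names, readings and threshold are made up:

```python
# Minimal edge-filtering sketch: process readings locally and forward
# only anomalies to the central system, reducing upstream traffic.
def edge_filter(readings, threshold=90.0):
    """Keep only the readings the central system actually needs to see."""
    return [r for r in readings if r["value"] > threshold]

# Simulated local sensor data (hypothetical values)
readings = [
    {"sensor": "axle-temp-1", "value": 61.2},
    {"sensor": "axle-temp-2", "value": 97.8},  # anomaly
    {"sensor": "axle-temp-3", "value": 58.9},
]

to_upload = edge_filter(readings)
print(f"Forwarding {len(to_upload)} of {len(readings)} readings upstream")
```

The point of the design is that the decision happens next to where the data is produced: normal readings never leave the edge node, so latency-sensitive reactions are local and the backhaul only carries the exceptions.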
Images | Xataka with Gemini 3 Pro

In Xataka | We had suspicions, but Sam Altman has confirmed it: AI is just an excuse to fire

We thought it took us a long time to learn to cook. Until some 780,000-year-old carp teeth rewrote history

If we think about the technologies that have most transformed humanity, the wheel, the steam engine or, more recently, the microchip easily come to mind. However, there is a much older and more fundamental "technology" that literally changed our anatomy: cooking.

The evolution. For decades, paleoanthropologists have debated exactly when our ancestors stopped eating food raw and started processing it through the control of fire. The most recent evidence not only rewrites that chronology, but confirms that mastering cooking was the true driving force of human evolution.

How do we know? Dating something like the beginning of cooking is difficult, and the reality is that, until recently, indisputable evidence of the continued use of fire for cooking went back only about 600,000 years. However, a major finding published in the prestigious journal Nature in 2022 pushed back this evolutionary clock. At the site of Gesher Benot Ya'aqov, in Israel, remains of large carp teeth were found. Using advanced techniques such as X-ray diffraction, the researchers demonstrated that these remains had been exposed to controlled, relatively low temperatures of less than 500 °C.

The first date. With this evidence it seemed quite clear that this was not an accidental fire: these animals began to be cooked around 780,000 years ago. This is consistent with the fact that Acheulean hunter-gatherers were already exploiting aquatic habitats, selecting nutrient-rich fish and cooking them in what archaeologists call "ghost hearths", structured fire zones.

Another hypothesis. Although direct evidence points back 780,000 years, biological clues suggest the culinary revolution began much earlier. This is what primatologist Richard Wrangham argued in his book Catching Fire and in subsequent studies published in Current Anthropology, proposing that systematic cooking emerged with Homo erectus approximately 1.9 million years ago.

His arguments. To support this date, Wrangham focuses mainly on energy efficiency: cooking predigests food, breaking down fibers and starches, which allows many more calories to be obtained with minimal effort. More importantly, by making digestion easier, Homo erectus no longer needed a massive intestinal tract to process hard, raw vegetables. And here size matters: intestinal tissue and brain tissue are both energetically very expensive, so by shrinking the gut, the surplus energy could be redirected to the growth of a much larger and more complex brain. This softer diet also explains why the molars of Homo erectus shrank and its jaws became less prominent.

Beyond nutrition. The adoption of cooking not only brought anatomical benefits; studies indicate that, for the first hominids, it was essential for roasting raw meat and killing the bacteria inside it. Fire control and the ability to process food were also key tools that facilitated human migration. Reassessments of classic sites, such as the Zhoukoudian caves in China, confirm that Homo erectus pekinensis used controlled fire to cook deer meat in specific strata, demonstrating that this practice was essential for adapting to colder climates outside Africa.

Images | Michael Lock

Computer science enrollments are plummeting for the first time in 20 years

Computer engineering is a classic among the most in-demand degrees of recent years. In fact, without going any further, it was the university degree with the most job opportunities in 2025. However, it faces an abyss: that of a future with AI, which, as we have already seen, is beginning to wreak havoc among junior profiles. Generation Z seems to have noticed: after two decades of continuous growth, enrollment has begun to decline in the mecca of computing, the Universities of California.

Context. For decades, studying computer science has been a true bastion of employability, whether through a university degree or its vocational training (FP) variants. In other words, if you wanted a qualification that would guarantee you a specialized job, something especially attractive for the middle and lower classes, computing was genuine life insurance. The San Francisco Chronicle has charted data from California's public universities showing that not even the 2008 financial crisis or the COVID-19 earthquake managed to undermine its appeal: computing seemed armored against crises. (Computer science enrollments on California public university campuses since 2000. Data: University of California.)

Something is changing in computing. More specifically, University of California data shows that 12,652 students have chosen a computer science degree this year, 6% fewer than in 2024 and 9% fewer than in 2023. It is true that this is still almost double the figure of a decade ago, but the decline is clear taking 20 years ago as a reference, the time of the last (slight) dip. The data does not come from a single campus or from any advocacy group: we are talking about California's public universities, which include illustrious names like UCLA, Berkeley and San Diego. Stanford, being private, does not appear in the data.

Why it matters. Because the perception of computing as a guaranteed path to success is no longer what it was: on one hand, AI is taking opportunities away from those starting out; on the other, there are the big layoffs we are seeing in big tech. It has also brought a shift in attitude: according to admissions staff at these universities quoted by the Californian newspaper, parents no longer push computer science as much as other more classic, tangible engineering fields: electrical, mechanical... And this is not just happening in California; it is a global phenomenon. The Universities of California are leading the way: what happens at Berkeley or UCLA is a preview of what we will soon see elsewhere too. Without going any further, the University of California San Diego got ahead of the curve last year by creating an AI degree. How did it go? Spoiler: it was a total success, as department chair and computer science professor Steven Swanson acknowledged to TechSpot.

Reading between the lines. A deeper analysis of the California data shows that there are not fewer technology students overall; they are simply shifting toward more specific, emerging programs. Between the success of those specialized programs and these figures, universities already have a pending task on the table: a curricular transition. Classical computing is becoming a cross-cutting subject rather than the final destination; in other words, what matters is no longer so much how a tool is built (churning out code), but how to think about it and how to validate it, in a future where students will work side by side with AI.

In Xataka | If Spain wants to imitate China and be a "country of engineers", this map reveals the extent of its problem

In Xataka | Studying with AI without thinking teaches nothing: these tips can help you take advantage of it and really learn

Cover | Vitaly Gariev
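As a rough check on the enrollment figures quoted in this article (the article reports only this year's count and the percentage drops; the prior-year totals below are implied by those figures, not reported directly):

```python
current = 12_652  # CS degree students this year (from the article)

# Back out the implied prior-year figures from the quoted drops
implied_2024 = current / (1 - 0.06)   # "6% less compared to 2024"
implied_2023 = current / (1 - 0.09)   # "9% less compared to 2023"

print(f"Implied 2024 enrollment: ~{implied_2024:.0f}")  # ~13,460
print(f"Implied 2023 enrollment: ~{implied_2023:.0f}")  # ~13,903
```

So the quoted percentages imply a loss of roughly 1,250 students in two years across the system, consistent with the "clear but not catastrophic" decline the article describes.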

There was a time when Megaupload conquered the world of downloads. And its king was Kim Dotcom: Crossover 1×39

At the beginning of the 2000s there were practically no legitimate ways to stream films, series or music, so some took advantage of the circumstance to offer "dark" alternatives. P2P networks were clearly one of them, but that era also saw the birth of phenomena like Megaupload. The platform became an absolute internet giant, and its creator, Kim Dotcom, is now a living part of the history of the network of networks. This hacker and entrepreneur managed to keep an entire industry on edge while making a fortune and living like a king. Eventually, however, the law caught up with him, and that spelled the end of Megaupload. The raid that ended in his arrest made headlines worldwide and marked the definitive end of the platform. Two years later Mega would appear, a much more "formal" and less shady alternative, but Dotcom would break away from it shortly after creating it. Since then, the entrepreneur has become a kind of political activist trying by all means to keep justice from bringing its full weight down on him. Whether he succeeds remains to be seen, but one thing is certain: the story of Kim Dotcom and Megaupload deserved its own episode of Crossover.

On YouTube | Crossover

In Xataka | Megaupload, rise and fall from grace of the portal that changed downloads on the Internet forever

Science and longevity experts are clear about what time you should wake up

For years, the culture of effort and extreme productivity has sold us the "five o'clock club" as the Holy Grail of success, pointing to CEOs, influencers and personal development gurus who insist on waking up at five in the morning. However, the science of aging has a very different message: waking up too early is not only unproductive, it can shave years off our lives.

The experts. Sebastian La Rosa, a doctor specializing in longevity, has pointed out that the optimal time to wake up falls in a very specific window: between 6:45 and 7:00 in the morning. And the scientific literature backs up his clinically based claims fairly well. Without going any further, an analysis spanning 20 years across large groups of people found that the lowest point of mortality risk sits almost exactly at seven in the morning. Beyond that point, the extremes (as so often in biology) come at a cost.

The extremes. Consistently getting up after 8 a.m. raises all-cause mortality risk by a staggering 39%. But being a night owl who forces themselves up very early every day isn't good for your health either. This is what emerged from UK Biobank data, with a sample of more than 433,000 people, showing that the evening chronotype (going to bed late and getting up late) carries a 10% higher total mortality risk compared to early risers, hitting people over 63 the hardest.

More evidence. Meanwhile, a massive study from the University of Exeter found that people who wake naturally between five and seven in the morning cut their risk of premature mortality by 20 to 25%. This fits perfectly with the recommendation to go to sleep between 10:00 p.m. and 11:00 p.m. in order to get 7 or 8 hours of restful sleep and, in the process, protect cardiovascular health.

The golden rule. While 7:00 a.m. seems like the evolutionary magic hour, researchers at Harvard and other leading institutions have reached an even more important conclusion: consistency matters most. Irregular sleep schedules, going to bed and getting up at very different times each day, increase mortality risk by between 20 and 48%. In fact, the regularity of the sleep-wake cycle has proven to be a stronger predictor of mortality than the total number of hours slept. The scientific consensus holds that sleeping between 6 and 8 hours is ideal, with exactly 7 hours being the figure linked to the greatest survival in large population cohorts. Sleeping less than seven hours or more than eight can unbalance the body and increase the risk of death.

Hacking the internal clock. Behind all these statistics lies pure cellular mechanics. Animal models have shown that "high-amplitude" circadian rhythms, with very marked differences between daytime alertness and nighttime rest, correlate directly with greater longevity. When this biological clock is thrown off by living out of sync with sunlight, metabolic pathways critical to aging, such as the mTOR pathway, the sirtuins or IGF-1, are disrupted. Exposing yourself to natural light as soon as you wake up, around seven in the morning, is the signal the brain needs to set this complex hormonal machinery in motion, mitigating oxidative damage and helping prevent cardiovascular disease and cancer.

Images | Muntazar Mansory

In Xataka | If you fall asleep in less than five minutes, you don't have a "superpower": it's a warning signal from your brain

In 1968, a man had the idea of creating the first tablet in history. The problem: he was decades ahead of his time.

If I ask you to think of the oldest tablet you can remember, you may go back to the first iPad, released in 2010 (and which, by the way, turned seven last week). Or, if you have been following the world of technology since before the turn of the century, you might remember the Microsoft Tablet PC from HP Compaq, announced in 2001. In reality, someone tried to create one much earlier, in 1968, before the term “tablet” had even been coined.

At that time, Alan Kay was a young researcher at the Xerox Palo Alto Research Center who had been mulling over the concept of a personal computer for some time (in contrast to the military, business and professional uses that dominated among manufacturers back then). After speaking with colleagues who were beginning to study how the Logo programming language could help young children advance in math, Kay had a realization: “This encounter finally made me see what the real destiny of personal computing was going to be. Not a personal dynamic ‘vehicle’, as Engelbart’s metaphors had it as opposed to IBM’s ‘railway tracks’, but something much deeper: a dynamic personal ‘medium’. With a vehicle, one could wait until high school to take ‘driving lessons’. But if it was a medium, it had to extend into the world of childhood.”

In 1968, Kay created the Dynabook concept, which he would spend several years refining. The book “Tracing the Dynabook: A Study of Technocultural Transformations” defines it like this: “Kay called it the Dynabook, and the name suggests what it was going to be: a dynamic book. That is, a medium like a book, but one that was interactive and controlled by the reader.
It would provide cognitive scaffolding in the same way that books and print media had done in recent centuries but, as Papert’s work with children and Logo had begun to demonstrate, it would take advantage of the new computing medium and provide the means for new kinds of exploration and expression.”

“A personal computer for children of all ages”

With its function clear in his mind, Kay began shaping the idea into cardboard prototypes (as can be seen in the image at the top of the article). In 1972, he presented his paper “A Personal Computer for Children of All Ages”, in which he offered more details not only about his motivation and his vision of personal computing at the time, but also about the device itself that he had in mind.

His idea was a kind of tablet-shaped personal computer aimed at education. It would be thin, with a liquid crystal touch screen and a keyboard; roughly the size of an ordinary notebook, with a graphical interface (a revolution at the time) capable of displaying graphics, music and text, and with internal storage for 500 pages. The keyboard would not be the only way to enter information: it could also be done by voice. In the diagram Kay drew, the word “stylus” also appears, although he did not discuss it in his paper.

Kay’s idea was that the Dynabook could connect to other systems to “copy” information onto it (among them, the ARPA Network), and he even predicted the existence of content “vending machines” whose wares could not be accessed until payment had been made. “The books can be installed instead of being bought or loaned,” he said.

On digital “ownership”, Kay said the following: “The ability to easily make copies and own the information yourself is not likely to weaken existing markets, just as xerography has strengthened publishing, and just as tapes have not hurt the music industry but have provided a way to organize one’s own music.
Most people are not interested in being a source or a smuggler, but rather like to trade and play with what they have.”

According to Kay’s calculations, the components to manufacture it would cost around $294, so it was not unreasonable to sell it for $500, expensive for the time. “The average annual amount spent per child on education is only $850,” he noted, and that is why he even proposed a different financing model: “perhaps the device should be given away as if it were a notebook, and only the content (cassettes, files, etc.) should be sold. This would be quite similar to the way TV packages or music are now distributed.” “Let’s do it!” he wrote to close his paper.

Unfortunately for Kay, the Dynabook never materialized. Despite his enthusiasm, it was never manufactured, for lack of support at Xerox and because of the technological limitations of the time. Do you remember what computers were like back then? Now imagine trying to build a tablet. Two Xerox PARC engineers, Chuck Thacker and Butler Lampson, asked for permission to try to build a similar machine on their own, and that is how the Xerox Alto, also known as the “Interim Dynabook”, came to be. It was not a tablet, far from it, but it retained some of the ideas Kay had laid out in his publication. The Xerox Alto was one of the first personal computers in history, and Steve Jobs and Apple’s engineers drew inspiration from some of its innovations and concepts, such as the use of a graphical interface, for their own computers.

(Starting at minute 2:27, the Xerox Alto graphical interface in action.)

Kay is remembered not only for the Dynabook itself, but for the educational vision he gave the project, for his distinctive take on the personal computing paradigm and for how he anticipated some of the problems (and even technologies) that would come later. Not only that: in 2001, Microsoft presented its Microsoft Tablet PC, a project led by Chuck Thacker and Butler Lampson.
Yes, the same ones who once tried to implement …
