With AI, Microsoft is once again insisting that we talk to our computer. Experience says we don't feel like it

You get up in the morning, go to work and sit down in front of the computer, but the first thing you do is not reach for the mouse and keyboard: you say "Hey, Copilot". Can you imagine it? Neither can I, not entirely, but that is Microsoft's clear obsession: getting us to talk to our PC instead of using the usual peripherals. The futuristic vision is striking, but it faces several enormous challenges.

What memories. Microsoft's push, along with other tech companies, to get us talking to machines goes back a long way. The first generation of voice assistants pursued exactly that goal: Alexa, Google Assistant and of course Cortana all tried to get us talking much more with our devices.

We were not prepared to talk to machines. Their success was rather limited, and even Satya Nadella admitted in 2023 that, for example, those "smart" speakers were "dumb as a rock".

In Xataka | Voice assistants and the fight to gain our trust

Cortana tried. The Redmond company certainly tried to make Cortana a success. It offered it on Windows 10, on Android and iOS, and even on the sadly defunct Windows Phone. Over time the company realized the assistant was not working, and it killed it off little by little. Microsoft used the launch of ChatGPT to push its new AI-powered assistant and definitively bury the first one: Copilot wants to be what Cortana never could.

Who asked for this? With "Hey, Copilot" the same thing is happening as with Cortana: did anyone ask Microsoft to integrate a voice assistant into Windows? The first-generation voice assistants were relegated to residual use, and Amazon suffered this problem firsthand. It bet billions of dollars that its Echo devices would become something we would never stop talking to, but most people just used them to set timers and play music.

AI promises to go much further. In the spring of 2024 we lived through a hopeful moment for this kind of technology.
OpenAI launched GPT-4o and demonstrated that natural conversations with a phone were not only possible but very powerful. AI could be our confidant and companion (controversy included) or our private tutor, and as others later set out to prove, it could also do things for us just by talking to it. Just ask the vibe coders.

But we still have a hard time talking to the PC. Since then we do seem to have grown a little more used to talking to our smartphone, but things look different on the PC. Statistics show that 77% of young people use their voice on their smartphone, while only 38% of them do so on the PC.

"But everyone at the PC can hear me". There is also a sociological component to using voice on the PC. The phone is more intimate and personal, while the PC is often used in a static setting with people around who can hear what we say. Moreover, in that physical context the unspoken rules of coexistence (don't disturb, don't invade others' acoustic space) outweigh the promise of convenience.

And then there is distrust. Microsoft's recent history does not help, especially with Recall, the feature that seemed genuinely striking and ingenious but ended up being delayed after generating a major privacy controversy. The launch of the new Windows 11 options, with "Hey, Copilot" as the main protagonist, does not seem to have been received with much enthusiasm, and the tone of the comments in this long thread, for example, is one of skepticism.

Rivals focus on phones and speakers, not the PC. The truth is that adoption of voice as a way to interact with our devices does not seem to be going viral. The erratic launch of Alexa+ does not seem to be delivering great advantages, Apple keeps making us wait for its renewed version of Siri, and only Google has stepped forward with Gemini, although not clearly on the desktop.
Talking to machines works, but not as well on the PC as on the phone.

(Video: "Project Astra: Exploring the Capabilities of a Universal AI Assistant")

A triumph for accessibility. Where there is a clear use case for this technology is accessibility. For users with reduced mobility, the ability to dictate or control the device by voice can be transformative. That need is concrete and well defined, however: it does not justify a general redesign of interaction, nor a marketing campaign trying to get us all to talk to the computer.

Voice should solve things, not be a party trick. Microsoft's real challenge is not technical (the technology is there) but human. The company must convince people that talking to the PC makes sense. To do so it must address three fronts: privacy, the social context (that you don't mind talking to your PC) and, of course, that the interaction is genuinely useful and works. That is where Copilot Actions comes in, which will have to demonstrate, like everything else, that Microsoft is on the right path here. Otherwise, "Hey, Copilot" could become the new Cortana.

In Xataka | Sundar Pichai (CEO of Google) believes that 'Her' is inevitable: "there will be people who fall in love with an AI and we should prepare ourselves"

Originally published in Xataka by Javier Pastor.

“We don’t see a real demand”

BMW Motorrad boss Markus Flasch was quite blunt when asked about the company's development of electric motorcycles: he ruled out the firm building large-displacement electric motorcycles in the near future. In an interview with the American outlet Common Tread during an event at a circuit in Austin, Texas, Flasch made it clear that the motorcycle division's electrification strategy will follow a very different path from the brand's cars.

No demand. "At the moment we do not see a real demand for electric motorcycles," Flasch told the outlet. The executive, who previously ran the M performance division on BMW's automotive side, explains that the motorcycle customer is fundamentally different from the car customer, and global regulations are not the same either.

Where they are betting on electric. BMW is not abandoning electric entirely, but it is concentrating its efforts on urban mobility. The company currently leads the market for electric scooters above 11 kW and recently launched the CE 02, an urban-styled electric moped that has been well received. According to Flasch, it is precisely in this segment where they see the trend toward full electrification. "We see an emerging trend in urban mobility, which we believe will eventually be fully electric," he said.

A more affordable sportbike on the way. Although Flasch was cautious with details about future launches, he did drop an interesting hint about the sport segment.
When asked whether BMW would develop a middle-displacement sports bike, similar to what other brands are doing to capture customers with more affordable and less intimidating products, Flasch replied: "You can expect to see something smaller than the 1,000 cc S 1000 RR, but it is a bit soon to talk about it." The hint points to a possible sportbike equipped with the 895 cc parallel twin already used by models such as the F 900 XR.

There will be no motocross bikes. While brands like Triumph and Ducati have decided to enter the world of motocross and enduro, BMW has decided not to follow that route. Flasch explains that, although the brand is famous for the GS and organizes demanding competitions such as the GS Trophy, the type of off-road riding its customers look for is different. "We have analyzed entering this segment and we made a conscious decision not to follow this path," he said, adding that both customers and the dealer network back the decision.

The future of the inline six. One of Flasch's most interesting statements was his commitment to the inline six-cylinder engine that powers the K 1600 models. "The inline six will have a brilliant future if you ask me," said the executive, who sees this engine as a hallmark of BMW's identity alongside the boxer. The fan base for these models is solid and stable, which has convinced the brand to keep focusing on this platform. It is an interesting bet at a time when other brands are simplifying their ranges and abandoning complex engine configurations.

The sector's challenges. Flasch also spoke about the challenges facing the motorcycle industry. For him, the biggest problem is that these vehicles, at least the kind BMW builds, are "lifestyle products" rather than primarily mobility. "What works against this, and what is probably making the global industry dip a little this year, is uncertainty and economic barriers," he explained.
Tariffs, exchange rates and technical barriers between markets can be serious threats for a global manufacturer like BMW.

Cover image | BMW Motorrad

In Xataka | Xiaomi bought three Tesla Model Y with a single objective: tearing them apart to understand why they were the best. And they have learned

They are experts in apps, but they don’t know how to use a printer

Generation Z is changing many workplace dynamics as it enters the labor market, but it is also falling short of some high expectations. The most visible is the technological readiness assumed of a generation that had to finish its education remotely and started its working life amid remote work and surrounded by AI. That lack of technological preparation has become known as "tech shame": not knowing how to use elementary office devices such as a printer, or how to scan a document.

Technological pressure on digital natives. Unlike boomers, members of Generation Z were born and raised immersed in technology in every aspect of their lives. Yet they run into unexpected difficulties with basic office equipment such as printers or scanners. Despite their technological skill in their personal lives, a study by LaSalle Network found that 48% of 2022 graduates did not feel technologically prepared for their jobs. According to HP's report 'Hybrid Work: Are We There Yet?', which surveyed 10,000 office workers around the world, 20% of young people felt judged for not knowing how to use the office's electronic devices, while the proportion among their counterparts from other generations was only 4%. "We were surprised to discover that young workers are feeling more 'tech shame' than their older colleagues, and this could be due to a series of problems," Debbie Irish, HP's head of human resources for the UK and Ireland, told Worklife.

Unfair expectations behind "tech shame". Although not knowing how a given device works has affected, to a greater or lesser extent, every generation currently in the labor market, it seems to be an unforgivable sin for Generation Z, given the expectation that they should master any technology. However, we should not lose sight of the fact that their experience is limited to the technologies they grew up with.
Printers, photocopiers and scanners are already obsolete technology for that generation. It would be like asking boomers or millennials to place purchase orders in Morse code.

Printers are less and less common at home. There was a time when having a printer at home was as common as having a computer. However, the data indicate that this trend has stagnated in recent years and that people print less and less at home, partly in response to the nightmare of dealing with a printer. A good illustration: according to HP sales data published by The Register, during the pandemic its revenue from consumer (non-professional) printing hardware grew 21% to 77 million dollars. That means all those people who suddenly had to work remotely did not previously have a printer at home.

It is normal that they don't know how to use them. Not being used to printers or scanners during their education means that, digital natives though they may be, they lack that prior hands-on experience. On the other hand, as an article in The Guardian points out, Generation Z has grown up using apps such as Instagram or TikTok, heavily oriented toward ease of use, so they expect the rest of the technology around them to be just as easy to use. Perhaps this is one of the reasons young people are more prone to online scams than their grandparents. Hands up, millennials who would know how to send a fax without Googling what a fax is.

In Xataka | Three out of four young people in the US are clear: they would rather work in a hospital than at a big tech company

Image | Pexels (Andrea Piacquadio)

*An earlier version of this article was published in September 2024

"Young people don't want to work here." The solution to the problem had been there since 1914

Henry Ford was not only a bold businessman who founded an automobile empire; he was also the cornerstone that revolutionized car manufacturing and a strategist of the economy. So it is no surprise that Jim Farley, Ford's current CEO, found in the company's founder the inspiration to solve a serious labor problem.

As Farley recounted in an interview with Walter Isaacson, biographer of Steve Jobs and Elon Musk, during the 2019 union negotiations he visited some of the brand's factories to ask his employees directly. The CEO confessed to Isaacson that the veterans told him: "Young people do not want to work here. Jim, you pay 17 dollars an hour, and they are very stressed." As Farley explained in the interview, workers told their boss that the new hires, most of them temps, worked eight hours at Amazon or other jobs and then did their shift at Ford on just three or four hours of sleep, just to make it to the end of the month on precarious wages.

A decision inspired by 1914. Ford's workforce was in a complicated situation: low wages were driving away the youngest, who preferred other ways of earning enough to live on, while the average age of Ford's permanent staff kept rising and the vacancies they left went unfilled. In this critical context, Farley decided to dig into the company's archives and look at what Henry Ford himself had done more than a century earlier. The legendary founder of Ford doubled his operators' daily pay in 1914 to $5 a day, far above the average at the time. Ford did not raise his employees' wages out of sudden goodness, but with a very clear logic: "I do this because I want my factory workers to buy my cars. If they earn enough money, they will buy my own product," Henry Ford confessed. According to Farley, applying this measure "was not easy. It was expensive.
But I think that is the type of change we need to implement in our country." As a result, the company made temporary workers full-time, which gave them access to higher wages, profit sharing and better medical coverage. In addition, temporary workers saw a reduction in the time they had to work at Ford before qualifying for a permanent position. The goal of the measure implemented by Ford's current CEO is exactly the same as Henry Ford's in 1914: to secure a stable, well-trained workforce and retain the best talent in order to improve the productivity of its assembly lines.

It's not only money, it's also training. Despite the salary improvements signed at Ford, Farley remained convinced of the need for good vocational education for young people. According to Fortune, the manufacturing industry will need around four million workers over the next decade as current operators retire. This scenario is not exclusive to the US: in Spain, demand for skilled labor is soaring in sectors such as construction and renewable energy, and there are not even enough young people to cover it, nor do they have the necessary training. "In Germany, all the operators in our factories go through an apprenticeship from high school, and each position requires about eight years of practical training," said Farley, convinced that this model ensures generational renewal and staff quality.

In Xataka | If the question is whether having a university degree improves your employment situation, the data gives us a figure: 5.7% unemployment

Images | Ford, Unsplash (ThisIsEngineering)

You can make money with your GPU when you are not using it. All you have to do is lend it to those who train AI models

Running and offering tools based on generative artificial intelligence takes a lot of computing power (and with it, a lot of energy). That is why the most powerful graphics cards on the market and datacenter-specific processors are in such demand today, and why companies like Nvidia, which specializes in this market, are reaping such overwhelming success. And since not everyone can afford a powerful graphics card to experiment with AI, a service is becoming more and more common: renting out your graphics card to earn some extra money. There are several platforms for doing it, and below we tell you everything you need to know.

How the business works. The model consists of acting as a host in a marketplace where clients look for GPU instances for their AI projects. You set the price per hour, the platform manages payments, and the client runs their workload in an isolated container on your machine. You could say it is like an Airbnb, but for computer hardware.

(Image: instances with an RTX 4090 on Vast.ai)

Numbers to take into account. An RTX 4090 usually rents for between $0.20 and $0.60 per hour on these marketplaces, depending on demand. In the best theoretical scenario, operating 24 hours a day for a full month, a high-end GPU could gross around $240 a month. But reality is usually more modest, since we have to discount what we pay on our electricity bill, the platform's commission (which can reach 24% on platforms like RunPod) and, above all, the fact that real occupancy is rarely 100%.

Expanding market. The price difference between the traditional cloud giants (AWS, Google Cloud) and these P2P marketplaces is considerable. While renting a GPU on AWS can cost three to six times more, platforms such as RunPod or Vast.ai offer access to very powerful graphics cards, such as the RTX 4090, for a few cents an hour.
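Those numbers can be put into a quick back-of-the-envelope sketch. The hourly rate, commission and gross ceiling come from the figures above; the occupancy, power draw and electricity price are illustrative assumptions, not measured values:

```python
# Back-of-the-envelope estimate of net monthly income from renting out a GPU.
# Rate and commission come from the text; occupancy, wattage and
# electricity price are assumptions for illustration.

def net_monthly_income(rate_per_hour, occupancy, commission,
                       gpu_watts, price_per_kwh, hours=24 * 30):
    """Gross rental income minus the platform's cut and the electricity bill."""
    rented_hours = hours * occupancy
    gross = rate_per_hour * rented_hours
    electricity = (gpu_watts / 1000) * rented_hours * price_per_kwh
    return gross * (1 - commission) - electricity

# RTX 4090 at $0.33/h, 50% real occupancy, 24% commission,
# ~400 W under load, $0.20 per kWh (assumed):
print(f"${net_monthly_income(0.33, 0.50, 0.24, 400, 0.20):.2f}")  # $61.49
```

With these assumed figures, roughly $61 a month remains, a long way from the theoretical $240 gross ceiling.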
And of course, these prices are really attractive to developers who want to experiment with artificial intelligence but cannot afford hardware comparable to the projects they work on.

What you should know before starting. Turning your PC into a rental server is not plug-and-play. In most cases you need to install Linux, configure up-to-date NVIDIA drivers, open network ports and keep your machine running for the hours you have committed to offer it, along with adequate cooling, which will be necessary if your GPU is going to work much harder and for much longer. In addition, your customers expect the machine to be available when they hire it, which means you will not be able to use it for gaming or personal work. Note also that the income generated is subject to taxation, and you may be required to register it as an economic activity if it exceeds a certain threshold.

There are certain risks. Beyond the wear on hardware from constantly running at full load, there are some security concerns. Although platforms use containers to isolate workloads, some experts warn about possible vulnerabilities in multi-tenant environments (those serving several users) that could compromise our data or use the GPU for improper purposes.

Is it worth it? For most users with a single GPU, the profits are modest once all the expenses are discounted. The business makes more sense if you already have the hardware amortized, do not pay too much for electricity and have some technical knowledge to keep the system stable. Even more so if you have a powerful or datacenter-grade graphics card. As an experiment or a source of complementary income it can be interesting, but do not expect it to make you rich.

First steps.
If you want to try it, start with "interruptible" offers, that is, cheaper ones that can be canceled, in order to gauge real demand. Vast.ai and RunPod offer detailed documentation on becoming a host, including step-by-step guides and preconfigured templates. Of course, it is advisable to always monitor real power consumption and set operating limits to prevent your equipment from becoming a slave to background processes.

Cover image | She Don

In Xataka | Nvidia, TSMC and SK Hynix are the most powerful chip companies on the planet. None can afford to let any of the others fall

If you don't have $100 billion, don't even try

OpenAI has revised its spending projections through 2029: it will burn through $115 billion, $80 billion more than expected.

Why it matters. This astronomical figure reveals two brutal realities about the AI race. The cost of training models and maintaining the infrastructure has exploded beyond any forecast. And OpenAI is raising an entry barrier so high that only tech giants, or companies with unlimited access to capital, can compete.

In perspective. OpenAI's $115 billion pales next to what its competitors plan to spend. In 2025 alone: Meta will invest $70 billion. Microsoft will disburse $80 billion. Amazon will reach $100 billion. Between the three they will add up to $250 billion in a single year, more than double what OpenAI projects to spend over five. Of course, these are giants funneling the revenue of their core businesses (advertising, cloud, e-commerce, etc.) into AI. OpenAI is in another category.

The money trail. OpenAI has gone from projecting spending of $6.5 billion this year to more than $8 billion. Next year the figure will double to $17 billion, $10 billion more than estimated. By 2027 it will reach $35 billion a year, and in 2028, $45 billion. The company is trying to rein in these exorbitant costs by developing its own chips with Broadcom and building its own data centers instead of renting cloud capacity.

Yes, but. OpenAI faces an existential dilemma. It needs to keep raising ever-larger financing rounds to sustain its spending pace, with valuations already ranging between $300 and $500 billion. Any stumble in execution or adoption could trigger a down round that would be devastating. Meanwhile, Microsoft, Meta and Amazon can burn this money without blinking: they have enormous cash flows from their core businesses and unlimited access to capital markets.

The threat.
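Summing the yearly projections quoted above shows how little headroom the reported $115 billion total leaves for 2029 (figures in billions of dollars, taken from the text; the 2029 remainder is merely inferred from the total):

```python
# Yearly spending projections cited for OpenAI, in billions of dollars.
openai_spend = {2025: 8, 2026: 17, 2027: 35, 2028: 45}

through_2028 = sum(openai_spend.values())
print(through_2028)        # 105
print(115 - through_2028)  # ~10 billion implied for 2029
```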
The Stargate project, which OpenAI runs with Oracle and SoftBank, valued at up to half a trillion dollars, reflects the scale of these bets. But even with those partners, OpenAI depends on outside investors while its competitors finance the race with their own profits.

What is happening. The industry has entered a phase where only financial muscle matters. It is no longer so much a technological race as one of economic endurance. OpenAI may have the best model today, but Microsoft has enterprise distribution, Meta has 3 billion users, and Amazon dominates cloud infrastructure. OpenAI's $115 billion has set the minimum entry price to the AI elite club. It is a clear message to the rest of the sector: if you cannot afford to lose $100 billion, don't even bother trying.

In Xataka | We have calculated how much money Big Tech is spending on data centers. The numbers are dizzying

Featured image | Xataka

AI has become the best example that if you don’t pay for the product, you are the product

They said it in the documentary 'The Social Dilemma': if you are not paying for the product, you are the product. You have surely heard the phrase more than once about the apps and services we pay for not with money, but with another kind of currency. Social networks, browsers, GPS apps... almost all of them collect our personal information and browsing data to do business. Artificial intelligence apps are no different; in fact, there is a race to attract more and more users and forge alliances to obtain data with which to keep training their chatbots.

Improving the experience. "When you share your content with us, you help our models become more accurate, since they can better solve your specific problems." That phrase is taken from ChatGPT's privacy policy, and Gemini's contains similar ones. Anthropic was the only one that kept conversations with Claude private, but yesterday it announced a change of policy. Our personal information, usage data and especially our conversations are used for continued training and improvement. We can disable this option if we want, but we have to be the ones to do it: it is on by default.

Data shortage. AI needs a lot of data to be trained. A lot. The first language models were trained on all kinds of content, including copyright-protected material such as books or images of works of art. But data is not infinite. By the end of 2021 there was already talk of a shortage problem, and early this year Elon Musk claimed that AI had already consumed all human knowledge. This is a problem for the advance of AI and may be partly responsible for its pace slowing down.

Solutions. AI companies have had to find ways to get new data. OpenAI transcribed a million hours of YouTube to train GPT-4, Google decided it would use any information published on the Internet to improve its AI, and Musk believes the future lies in synthetic data generated by AI itself.
But there is something else that can be used to train AI: our conversations with it. The first models did not have so many users, but today the volume of data users generate is much greater. ChatGPT alone has 800 million users; that sounds like too juicy a trove not to use.

Giving it away. The conversations we have with ChatGPT can be useful, but they are even more useful when they are data from specific groups. Rest of World reports that, to achieve this, AI companies are forging alliances with companies and organizations that give them access to data that cannot be obtained through web scraping. OpenAI has partnered with Shopee and offers its Plus plan to VIP users in Indonesia, Vietnam and Thailand. Google offers its Gemini Pro plan free for a year to students in India, and Perplexity Pro is available for free through carriers such as Movistar, or Airtel in India... These alliances grow their user base and provide real consumption data from specific groups, which lets them train their models more precisely.

The Chinese case. China is an example of how access to specific data can give an advantage in developing effective solutions. Pharmaceutical research companies using AI have access to the national health system's data, which covers more than 600 million people. This gives them a competitive advantage and has led Chinese companies to sign multimillion-dollar agreements with large pharmaceutical firms.

Concern. Experts such as Sameer Patil, a director at the Observer Research Foundation, call for clearer regulation, especially in sensitive sectors such as health or finance: "participating companies will have to ensure that data sets are anonymous and stripped of personal information," he says in statements to Rest of World.

Image | ChatGPT

In Xataka | Meta goes all in on AI: it announces a data center almost as large as Manhattan and up to $65 billion in investment

How to create a podcast from a text, to study, research or simply listen when you don't feel like reading

Let's explain how to create an audio file from a text using artificial intelligence, so that you can have a kind of podcast to study or review something when you do not feel like reading. Instead of having to read a text, you can simply relax and listen. We will start the article with a series of tips and recommendations to take into account before getting down to work. Then we will tell you how to do it with one of the best tools available for the job, and we will continue with two ways to do it with AI assistants such as ChatGPT, plus another alternative that is also interesting for generating voice with artificial intelligence.

Before starting, things to take into account. Before getting to work on this, there are some things you should keep in mind. First, define whether the audio or "podcast" you want to create is educational, informative or just entertainment. Also be clear about whether only you will listen to it or someone else will too, so you can decide how polished it needs to be. A written text does not always sound natural when you listen to it, especially with long, somewhat intricate sentences. That is why short phrases and plain language are advisable, not something that makes listening heavy going. Where you can, decide on the tone and voice style, choosing between a more formal or a more informal one. This may depend on the content you want to consume as audio, and on whether only you or other people will listen to it. It is also advisable to watch the audio's duration. The recommended length may vary: if it is dense material for studying, it may become tiring from 20 minutes on, but if it is for leisure or something narrative you should have no problem with longer durations. If it is dense, try to segment the content well.
And finally, always check the result and do not be afraid to try again if it does not convince you. Sometimes AI voices can sound unnatural or there may be errors, which is why it is important to check it, even if only briefly. In the end, the most important part is the script that will be narrated, and that is where you should spend most of the effort. Try to make it natural, well punctuated and well structured.

Audio from your notes with NotebookLM. The first tool you can use is NotebookLM, from Google, a service that uses artificial intelligence to organize your sources and can even create an audio summary, as if it were a podcast. It is a kind of ChatGPT, but one in which everything you do is based on the sources you have added by hand. You can use NotebookLM through its official website or its mobile apps. The official website is notebooklm.google.com, and it is also available on Google Play for Android and in the App Store for the iPhone. You can use it for free, but the paid version gives you more audio summaries. The first thing to do with this tool is to create a notebook or workspace. Inside, in the left column you can add sources, which are the files the AI will use to obtain the information. They can be text documents, slides, PDFs, YouTube videos, or links to web pages and online articles. Once you have added the sources, go to the Studio section, where the tool will create an audio summary of the content of all the files. With this you will have created your personal podcast to learn about whatever you want.

How to do it with ChatGPT and other AIs. Another option is to use a generative AI such as ChatGPT, Copilot, Gemini or DeepSeek. These tools do not let you create a downloadable audio file, although that is something you can do with third-party tools. What you can do is listen directly in the AI's app.
What you have to do is create a summary script of an article. Then, that script can be listened to directly in ChatGPT, Gemini or another platform, or taken to another AI that generates a downloadable audio file. Let's explain everything step by step.

To start, you have to tell ChatGPT to summarize an article or a web page. To do so, you must attach it and include the prompt with the instructions to generate your script. You can include the article you want summarized by uploading a text file or a PDF, or by pasting the web address directly. The prompt we used is as follows:

"I want to create an audio to listen to a summary of the content of this website. I want you to generate the script so I can then copy and paste it into a program that converts text to audio. The script has to be narrative, without structure; you simply have to write it so it can be read straight through. (Link)"

As you can see, in the prompt we reiterated that the text generated by ChatGPT must be natural and readable as-is, because if you don't mention it, it may tend to generate a script outline with placeholders to fill in, and what we want is a finished text to copy and paste. Here, it is you who decides how and from what sources you want to make the summary, whether from one or several files you upload or add. Just remember that everything you upload or paste will be stored on the servers of the company that owns the AI, so be careful if you are adding sensitive data.

Europe and Japan advance unstoppably toward nuclear fusion. Their latest achievement reminds us why we don't have it yet

The experimental nuclear fusion reactor JT-60SA resides in Naka, a small city not far from Tokyo (Japan). Its construction began in January 2013, but it did not start from scratch; it took as its starting point the JT-60 reactor, its precursor, a machine that came into operation in 1985 and that over more than three decades reached very important milestones in the field of fusion energy. Assembly of the JT-60SA ended at the beginning of 2020, and since the end of 2023 it has been ready to start its first tests with plasma.

This machine is a tokamak device that, like JET and the future ITER, resorts to magnetic confinement of the ionized plasma containing the deuterium and tritium nuclei to trigger nuclear fusion reactions. And this machine is titanic. Colossal. It has a height of 15.4 meters and a diameter of 13.7 meters. However, the most striking thing is the "specifications" that let us form an idea of its performance: it is capable of confining a plasma with a volume of 130 m³, as well as generating a toroidal magnetic field of 2.25 teslas and sustaining a current inside the plasma of 5.5 MA (5.5 million amperes). These figures are shocking, and presumably when ITER is ready to start its first tests with plasma its figures will be even more impressive. Of course, over the coming months, as the JT-60SA reactor delivers its first results, we will cover them in great detail.

JT-60SA already has one of the most advanced diagnostic systems in existence

On April 22, the last components that the Japanese and European engineers working on the reactor needed to assemble the Thomson scattering diagnostic system arrived at the JT-60SA facilities. Every time the researchers operating this very complex machine carry out an experiment, they need to know the temperature and density of the plasma electrons with the greatest possible precision.
The main problem they face is that it is not possible to obtain this data by taking direct measurements. For the fusion of the deuterium and tritium nuclei to take place, the plasma that contains them must reach a temperature of at least 150 million degrees Celsius, and no sensor in contact with it at that temperature would survive. This is why the engineers of the JT-60SA reactor have been forced to set up an extraordinarily sophisticated diagnostic system.

The components of the Thomson scattering measurement equipment have been designed and manufactured in Italy, Romania and Japan. Broadly speaking, this device manages to measure the temperature and density of the plasma electrons by analyzing the light from a high-power laser beam scattered, precisely, by the plasma electrons themselves. In short, the interaction between the laser and the plasma is what allows the engineers to calculate the temperature and density indirectly.

The JT-60SA reactor will have two Thomson scattering diagnostic systems. The one for the plasma core has been developed in Japan, and the one for the plasma edge has been devised in Europe. Both are currently being installed, and, if everything goes well, in a few months this machine will have some of the most advanced diagnostic and measurement equipment in existence. Nuclear fusion no longer poses any challenge from the point of view of fundamental physics. If we still have no commercial fusion energy reactors, it is because this technology still requires solving several challenges in the field of engineering. The fine-tuning of this diagnostic system was one of them.

Image | QST

More information | Eurofusion

In Xataka | The JET reactor has successfully completed its final tests with deuterium and tritium. It is a crucial milestone for nuclear fusion
