The dark side of longevity prediction technology


Scientists have been developing tools to measure lifespan for some time. Can death prediction technology have a dark side?


If you could predict the moment of your death, would you do it? I don’t know about you, but looking at human history, the answer to that question has mostly been yes. In Neolithic China, soothsayers read the future in the cracks of animal bones; the ancient Greeks read it in the flight of birds; in Mesopotamia, people even tried to see it in the entrails of dead animals. Throughout history and across cultures, we humans have looked to the stars and the movements of the planets, to weather patterns, even to coffee grounds and the lines of our palms, in hopes of securing our happiness and a long life.

It was only about three hundred years ago, however, that the art of divination met the probability calculations of Abraham de Moivre and became a little more scientific. The French-born mathematician tried to work out the date of his own death by equation; accurate predictions of the time of death, though, did not arrive until long after his.


In June 2021, de Moivre’s wish finally came true: scientists arrived at a credible tool for measuring lifespan. A team of researchers at deCODE Genetics in Iceland, working with measurements of about 5,000 plasma proteins from roughly 23,000 Icelanders, built a tool that, in their words, measures “the amount of time left in a person’s life.” It is an unusual claim, and one that inevitably raises questions about the method behind the research, its ethics, and the meaning of life.

Two members of the research team, Kari Stefansson and Thjodbjorg Eiriksdottir, found that the levels of certain proteins in the blood are linked to longevity and that different causes of death share similar “protein profiles”. Eiriksdottir says their tool can read these profiles from a simple blood sample and see, so to speak, the hourglass of a person’s remaining days in the blood plasma.

The two scientists call these indicators of lifespan “biomarkers”, and according to them there are 106 biomarkers that can be used to predict the time of death regardless of the type of disease. What distinguishes this research from similar work is its sheer scale: the method the team used (a SOMAmer-based multiplex proteomic assay) can measure thousands of proteins in a blood sample simultaneously.
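To make the idea concrete, here is a minimal sketch of how a mortality-risk score could be fitted to protein measurements, using a standard survival model (a Cox proportional hazards model from the lifelines Python library) on made-up data. The column names, the five-protein panel, and the model choice are illustrative assumptions, not the deCODE team’s actual method.

# Illustrative sketch only: fit a survival model to hypothetical protein levels.
# Assumes the numpy, pandas, and lifelines packages are installed; the data,
# column names, and five-protein panel are invented for the example.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(seed=0)
n = 1000

# Toy dataset: five plasma protein levels per person, plus follow-up time
# in years and whether the person died during follow-up.
df = pd.DataFrame({f"protein_{i}": rng.normal(size=n) for i in range(1, 6)})
df["followup_years"] = rng.exponential(scale=10.0, size=n)
df["died"] = rng.integers(0, 2, size=n)

# A Cox proportional hazards model relates protein levels to the risk of death.
cph = CoxPHFitter()
cph.fit(df, duration_col="followup_years", event_col="died")

# One relative mortality-risk score per "blood sample": higher means riskier.
risk_scores = cph.predict_partial_hazard(df)
print(risk_scores.head())

The real study works at a very different scale, with thousands of proteins measured across tens of thousands of people, but the basic shape of the problem is the same: one blood sample in, one standardized risk score out.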

Certain blood proteins are associated with longevity

Of course, the result of all these measurements is not an exact date and time of death. Rather, the tool lets medical professionals, with nothing more than a needle and a small blood sample, identify which patients are at higher risk of death and which are at lower risk. It may not resemble the crystal ball of fantasy stories, but it is a launching pad for perfecting this kind of technology. The designers of the algorithm hope its calculations, driven entirely by data and stripped of any emotional dimension, will help doctors and the people around a patient with difficult decisions.

For the deCODE researchers, the ability of biomarkers to predict the lifespan of large parts of the population at once is the most striking result. “Using only one blood sample from each person, large groups can easily be compared in a standardized way,” Stefansson said of the study’s clinical trials.

But can such a standardized approach adapt to the vastly different needs of patients? What happens when technology like this, with its AI-based algorithms, leaves the laboratory and is used in real situations? To answer that question we do not need to settle for speculation, because we saw an example of it during the COVID-19 pandemic. It was in the wake of the pandemic that mortality prediction data was used on such a large scale for the first time, further revealing the deeply troubling depths of machine calculation.

In October 2021, a study at the University of Copenhagen showed that a specific protein on the surface of cells could potentially predict who was at risk of serious infection from the coronavirus. Using this protein biomarker, it was possible to determine with 78.8 percent accuracy which people would be hit hardest by COVID-19. On the face of it, this seemed like great news, because it identified the patients who needed care most urgently while keeping the more resilient and healthier ones on the waiting list a little longer. But when COVID patients filled ICU wards and hospitals faced shortages of beds and equipment, the data was used in the opposite direction: to decide who was more likely to survive and could receive medical help, and who had little hope of survival and should be denied medical equipment.

When healthy, able-bodied people think about predicting the time of death, they picture science-fiction scenes from films like Minority Report or Terminator and imagine a future in which death and disease are prevented before they happen. For people with physical disabilities or terminal illnesses, however, death prediction technology is a reminder of a harsh reality: in certain circumstances, medical help will be withheld from them in favor of those with a better chance of survival. A science that wants to predict the length of life is, like it or not, accompanied by a judgment about the value of life; the longer the predicted lifespan, the more that life is deemed to be worth. Or, seen from the other side, the lives of those expected to live less are treated as worth less than the lives of those expected to live longer.

Death prediction technology can never measure the value of life

Death prediction technology can never measure the value of a life, and the severity of this issue shows itself when we look at studies of the most severe epidemics in history and see how terribly entrenched discrimination against disabled people is. In crisis situations, the thinking that the elderly, the infirm, and the chronically ill are less worth saving tends to prevail, and on that basis health assistance is denied to them.

While the medical community is pleased with the results of this study and efforts to perfect the technology continue in earnest, a question arises: what risks and what discrimination will such tools bring for disabled people? If the deCODE researchers’ standardized predictions are deployed with the view that health care should go first to the more able-bodied because they are expected to live longer, then measuring lifespan takes on a meaning beyond predicting death; in that situation, such a tool may actually hasten the death of disabled people.

According to Alyssa Burgart, a physician at Stanford University, death prediction technology does not necessarily have to be a bad thing; whether it turns out positive or negative depends entirely on human decision-making. The technology is neither as unbiased nor as accurate as it is believed to be, yet when policymakers assume that predictions of the time of death are correct, they may make the wrong decisions and direct treatment resources to people who could overcome their illnesses without them.

But the same life expectancy data could be used to direct resources toward those most at risk of dying. Death itself should not be the deciding factor; rather, according to Matthew Cortland, a lawyer and senior fellow at Data For Progress, the question this data should help us ask is: “What tools and treatments help people survive?” And that question is not limited to hospital care; it extends to the allocation of resources outside the hospital, for example a safe place to live, enough food, and affordable medicine.

Scientists will continue to develop more accurate tools for predicting the time of death, and those tools can be put to good use, but the underlying assumption that saving the lives of people who will live longer is more valuable needs to change. In times of crisis, instead of deciding who most deserves treatment, build new hospitals, set up temporary medical tents, or bring retired doctors back so that medical care is equally available to everyone. More importantly, the right policies need to be put in place today to protect the disabled people whom death prediction technology would push off the waiting list for medical assistance.

And finally, predicting the time of death may be useful for the early diagnosis of disease, but it will never be able to measure the value of a life. There are, of course, better ways to measure the value of a life than counting the days that remain, and that is something each of us has to work out for ourselves.


Unveiling of OpenAI’s new artificial intelligence capabilities


OpenAI claims that its free GPT-4o model can talk, laugh, sing, and see like a human. The company is also releasing a desktop version of ChatGPT.


Yesterday, OpenAI introduced GPT-4o, a completely new artificial intelligence model that, according to the company, is a step toward much more natural human-computer interaction.
This new model accepts any combination of text, audio, and image as input and can produce output in all three formats. It can also detect emotions, allow the user to interrupt it mid-speech, and respond almost as quickly as a human during a conversation.
In the live stream introducing the new model, Mira Murati, OpenAI’s chief technology officer, said: “The special thing about GPT-4o is that it brings GPT-4-level intelligence to everyone, including our free users. This is the first time we’ve taken a big step forward in ease of use.”
During the unveiling, OpenAI showed GPT-4o translating live between English and Italian, helping a researcher solve a linear equation written on paper in real time, and giving an OpenAI executive advice on deep breathing just by listening to his breathing.
The letter “o” in the name of the GPT-4o model stands for the word “Omni”, which is a reference to the multifaceted capabilities of this model.
OpenAI said that GPT-4o was trained on text, images, and audio together, meaning all inputs and outputs are processed by the same neural network. This differs from the company’s previous models, including GPT-3.5 and GPT-4, which allowed users to ask questions by speaking but then converted the speech to text, causing tone and emotion to be lost and slowing down the interaction.
OpenAI will make the new model available for free to everyone, including free ChatGPT users, over the next few weeks, and starting today it is also releasing a desktop version of ChatGPT for Apple computers (Mac), initially for users with a paid subscription. The introduction of the new OpenAI model took place one day before Google I/O, Google’s annual developer conference.
It should be noted that shortly after OpenAI introduced GPT-4o, Google also presented a version of its artificial intelligence known as Gemini with similar capabilities.
While the GPT-4 model excelled at tasks related to image and text analysis, the GPT-4o model integrates speech processing and expands its range of capabilities.

Natural human-computer interaction

According to OpenAI, the GPT-4o model is a step toward much more natural human-computer interaction: it accepts any combination of text, audio, and image as input and produces any combination of text, audio, and image as output.
The model can respond to voice input in as little as 232 milliseconds, with an average of 320 milliseconds, which is similar to human response time in a conversation.
It matches the performance of GPT-4 Turbo on English text and code, with significant improvement on text in non-English languages, while being much faster and 50 percent cheaper through the application programming interface (API). GPT-4o is especially better at visual and audio understanding than existing models.
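For developers, that API access looks roughly like the minimal sketch below, which uses OpenAI’s official Python SDK to send a text prompt together with an image to GPT-4o. The prompt, the image URL, and the surrounding script are invented for illustration; only the gpt-4o model name and the general chat-completions call come from OpenAI’s announcement and documentation.

# Minimal sketch: sending text plus an image to GPT-4o through OpenAI's Python SDK.
# Assumes the openai package is installed and OPENAI_API_KEY is set in the
# environment; the prompt and image URL below are invented for illustration.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text",
                 "text": "In one sentence, describe the mood of the person in this photo."},
                {"type": "image_url",
                 "image_url": {"url": "https://example.com/selfie.jpg"}},
            ],
        }
    ],
)

print(response.choices[0].message.content)

Audio input and output are not shown here, since, as OpenAI’s own statement later in this article notes, those modalities are being rolled out more gradually.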

What exactly does the introduction of this model mean for users?

The GPT-4o model significantly enhances the experience of using ChatGPT, OpenAI’s wildly popular AI chatbot. Users can now interact with ChatGPT like a personal assistant, ask it questions, and even interrupt it whenever they want.
Additionally, as mentioned, OpenAI is introducing a desktop version of ChatGPT along with a revamped user interface.
“We recognize the increasing complexity of these models, but our goal is to make the interaction experience more intuitive and seamless,” Murati emphasized. “We want users to focus on working with GPT instead of being distracted by the UI. Our new model can reason across text, audio, and video in real time. It is versatile, fun to work with, and a step toward a much more natural form of human-computer interaction, and even human-computer-computer interaction.”
The GPT-4o model has also been extensively reviewed by more than 70 experts in areas such as social psychology, bias and fairness, and misinformation to identify risks introduced or amplified by the newly added modalities, and OpenAI has used these findings to develop safety interventions that make interacting with GPT-4o safer. Members of the OpenAI team demonstrated the model’s audio skills during the public presentation. A researcher named Mark Chen highlighted its ability to gauge emotions and noted how it adapts when the user interrupts it.
Chen demonstrated the model’s versatility by requesting a bedtime story in a variety of tones, from dramatic to robotic, and even had it read the story aloud. As mentioned, the new model is available for free to all ChatGPT users; until now, GPT-4-class models were only available to people paying a monthly subscription.
“This is important to us because we want to make great AI tools available to everyone,” said OpenAI CEO Sam Altman.

Strong market for generative artificial intelligence

OpenAI is leading the way in generative AI alongside Microsoft and Google, as companies across sectors rush to integrate AI-powered chatbots into their services to stay competitive.
For example, Anthropic, a competitor of OpenAI, recently unveiled its first enterprise offering along with a free app for the iPhone.
“We recognize that GPT-4o’s audio modalities present new risks,” OpenAI said in a statement. “Today we are publicly releasing text and image inputs and text outputs, and over the coming weeks and months we will work on the technical infrastructure, usability via post-training, and safety necessary to release the other modalities. For example, at launch, audio outputs will be limited to a selection of preset voices and will abide by our existing safety policies. We will share further details about the full range of GPT-4o’s modalities in the forthcoming system card.”
According to reports, the generative AI market attracted a staggering $29.1 billion in investment across nearly 700 deals in 2023, up more than 260 percent from the previous year. Forecasts suggest the market’s revenue will exceed one trillion dollars within the next decade. However, academics and ethicists troubled by the technology’s potential to perpetuate bias are concerned about the rapid deployment of services that have not been thoroughly tested.
Since launching in November 2022, ChatGPT has set the record for the fastest-growing user base in history and now has nearly 100 million weekly active users. OpenAI reports that more than 92 percent of Fortune 500 companies use it.
At the presentation event last night, Murati took some questions from the audience, and when she spoke in fluent Italian and the artificial intelligence translated her words into English, the hall filled with excitement.
There is more: the next time you take a selfie, OpenAI’s artificial intelligence can assess your emotions. All you have to do is select the selfie and ask ChatGPT to tell you how you feel.
In fact, OpenAI’s employees looked so pleased during the demos that ChatGPT asked them why they were in such a good mood.


Samsung S95B OLED TV review

The S95B is Samsung’s serious attempt to return to the OLED TV market after a decade-long hiatus. But can it take the OLED throne back from LG?


What can fit in a space just 4 mm deep? About 40 sheets of paper, say, or five bank cards. The thought that Samsung has managed to pack a large 4K OLED panel capable of producing more than 2,000 nits of brightness into a depth of less than 4 mm is astonishing. Join me as I review the Samsung S95B TV.


MacBook Air M3 review: lovely, powerful, and economical

For all its performance improvements, the MacBook Air M3 mostly strengthens the value and economic case for the MacBook Air M1 rather than being the ideal purchase itself.


If you are looking for a compact, well-built, high-quality laptop for everyday light use, the MacBook Air M3 review is not for you; close this article, visit the Zoomit products section, and pick a store to buy the MacBook Air M1. But if, like me, you are excited about developments in the hardware world and curious about the performance of the M3 chip in the 2024 MacBook Air, stay with Zoomit.
