
San Francisco could use killer remote-controlled robots


The San Francisco Police Department can now use remote-controlled robots to kill criminals if the situation is acute and requires immediate action.

San Francisco police have been authorized to use remote-controlled robots to kill suspects, The Verge reports. Members of the San Francisco Board of Supervisors approved a controversial policy allowing police robots to be used as lethal force when the public or police officers are in imminent danger, and only when using a robot is preferable to any other option.

The San Francisco Police Department says it has no pre-armed robots in its fleet and does not intend to equip its devices with firearms. According to a department official, the robots can instead be fitted with explosive charges to contact, incapacitate, or distract violent, armed, or otherwise dangerous suspects. The robots will reportedly be used only when the situation is acute and police want to save innocent people or prevent further casualties.

The San Francisco Police Department currently has 17 robots, 12 of which are operational. They fall broadly into two groups: large and medium tracked robots (such as the Remotec F6A and the QinetiQ Talon) and small robots (such as the iRobot FirstLook and the Recon Robotics Throwbot). All of them are controlled remotely.

The first category is used to investigate or detonate explosives, while the smaller robots in the second category are sent into target areas to monitor the scene and identify suspects. All of the department’s robots are controlled by human operators and have only limited autonomous capabilities.

US police departments have used robots to kill before. The first known incident occurred in 2016, when the Dallas Police Department used a bomb-disposal robot to kill a sniper who had shot and killed five officers. Some supported the Dallas police for ending a tragic standoff; others criticized the use of a robot when alternatives had not been exhausted.

San Francisco legislators are said to have debated the killer-robot policy for two hours before passing it by eight votes to three. One supporter argued that opponents of using robots against criminals may come across to the public as anti-police. Meanwhile, the chairman of the San Francisco Board of Supervisors, who voted against the plan, said he was not anti-police and considered himself “a supporter of people of colour.”

Several police departments elsewhere in the United States have decided against using robots to kill criminals. The Oakland Police Department initially gave a similar plan the green light but later reversed its decision without explanation. Many civil rights groups have protested the approval of San Francisco’s new policy.

Via: The Verge


Artificial intelligence builds cities better than humans


Artificial intelligence builds cities better than humans: a new study suggests that AI programs can design better urban layouts than human planners.

Imagine living in a green and cool city, full of parks and walking paths, bike paths, and dedicated bus routes that take people to shops, schools, and services in minutes.

This dream is the stuff of urban planning and, in a way, the definition of utopia or of the 15-minute city: a place where all basic needs and services are reachable within a quarter of an hour, public health is a priority, and cars do not cause excessive pollution.

Artificial intelligence can help urban planners bring this vision closer to reality.

Read More: Artificial intelligence smells better than humans

A new study from researchers at Tsinghua University in China shows how machine learning technology can create more efficient spatial layouts than humans in a fraction of the time.

Automation scientist Yu Zheng and his colleagues wanted to find new ways to relieve our increasingly congested cities.

They developed an artificial intelligence system to handle the most tedious and computationally demanding urban planning tasks, and found that it produced urban layouts that scored about 50 percent better than human designs on three criteria: access to services, access to green space, and traffic levels.

Starting at a small scale, Zheng and his colleagues instructed their model to map urban areas only a few square kilometers in size.

After two days of training, the system of multiple neural networks searched for the road layout and land use that best matched the 15-minute-city concept as well as local planning policies and needs.
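To make the access criterion concrete, here is a toy sketch of the kind of metric such a system might optimize: it scores a hypothetical grid layout by the share of residential cells within a 15-minute walk of a service cell. The grid, cell size, and walking speed are assumptions for illustration; this is not the Tsinghua team’s actual model.

```python
# Toy illustration only: score a grid layout by the share of residential cells
# within a 15-minute walk of a service cell. Not the Tsinghua model.
from collections import deque

# Hypothetical layout: 'R' = residential, 'S' = service (shop/school/clinic), '.' = other.
LAYOUT = [
    "RRR.S",
    "RR...",
    "..RRR",
    "S..RR",
    "RRRR.",
]
CELL_SIZE_M = 400                   # assumed size of one grid cell, in meters
WALK_SPEED_M_PER_MIN = 80           # assumed average walking speed
MAX_STEPS = int(15 * WALK_SPEED_M_PER_MIN / CELL_SIZE_M)  # grid steps reachable in 15 minutes

def service_access_share(layout):
    """Fraction of residential cells within MAX_STEPS of any service cell."""
    rows, cols = len(layout), len(layout[0])
    # Multi-source breadth-first search outward from every service cell.
    dist = {(r, c): 0 for r in range(rows) for c in range(cols) if layout[r][c] == "S"}
    queue = deque(dist)
    while queue:
        r, c = queue.popleft()
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and (nr, nc) not in dist:
                dist[(nr, nc)] = dist[(r, c)] + 1
                queue.append((nr, nc))
    homes = [(r, c) for r in range(rows) for c in range(cols) if layout[r][c] == "R"]
    reachable = sum(1 for h in homes if dist.get(h, float("inf")) <= MAX_STEPS)
    return reachable / len(homes)

print(f"15-minute service access: {service_access_share(LAYOUT):.0%}")
```

An optimizer of the kind described in the study would search over many candidate layouts and keep those that improve scores like this one, alongside green-space and traffic metrics.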

While Zheng and his colleagues’ AI model has some capacity for planning larger urban areas, the overall design of an entire city would be far more complex.

Automating urban design and planning processes can save significant time, the researchers say. For example, tasks that the AI model computes in seconds would take human planners between 50 and 100 minutes.

According to the researchers, automating the most time-consuming urban planning tasks frees up planners’ time to focus on more challenging or human-centered tasks such as public engagement and aesthetics.

Rather than having AI replace people, Zheng and his colleagues present their system as an urban planning assistant: it generates conceptual designs optimized by algorithms, which human experts then review, adjust, and evaluate based on community feedback.

Commenting on the study, MIT research scientist Paolo Santi wrote: “This last step is critical to a good design. Urban planning is not just allocating space to buildings, parks, etc., but designing a place where urban communities can live, work, interact, and hopefully thrive for a long time to come.”

By comparing their human-AI workflow with designs produced by humans alone, Zheng and colleagues found that the collaborative process could increase access to basic services and parks by 12 and 5 percent, respectively.

The researchers also surveyed 100 urban designers who were not told whether the designs they were asked to choose between had been created by human planners or by the AI. The AI designs drew significantly more votes for some layouts, while for others participants showed no clear preference.

The real test, of course, will come in the communities built from these plans, measured by the reductions in noise, heat, and pollution and the improvements in public health that good urban planning promises.

This study was published in the journal Nature Computational Science.


Artificial intelligence smells better than humans


Artificial intelligence smells better than humans: a new AI system can describe odors better than people, and researchers say it outperformed human panelists on 53 percent of the 400 compounds tested.

An important goal of neuroscience is understanding how our senses translate light into sight, sound into hearing, food into taste, and texture into touch. How the sense of smell works, however, has puzzled researchers for a long time.

Humans find the smell of flowers pleasant and the smell of rotten food repulsive thanks to proteins in the nose called odor receptors. However, little is known about how these receptors detect chemicals and convert them into the perception of aromas.

Read More: The world’s first dental robot started working

To understand this phenomenon, researchers from the Monell Chemical Senses Center and Osmo, a startup based in Cambridge, Massachusetts, investigated the relationship between airborne chemicals and the brain’s olfactory perception.

This research led the scientists to develop a machine-learning model that can now verbally describe the smell of compounds with human-level skill.

The details of this study have been published in the journal “Science”.

Extensive effort

There are about 400 functional olfactory receptors in humans. These receptor proteins in the olfactory nerves interact with airborne chemicals and send electrical signals to the olfactory bulb.

According to these researchers, the number of olfactory receptors is much greater than the four receptors used for color vision or the 40 receptors used for taste.

Joel Mainland, one of the study’s senior authors and a member of the Monell Center, said in a statement: “In olfaction research, the question of what physical properties make the brain perceive the smell of molecules in the air remains a mystery.” So his group worked to understand the relationship between how molecules are structured and how we perceive their smell, and developed a model that learns to associate descriptions of a molecule’s odor with the odor’s molecular structure.

A commercial dataset containing the molecular structures and olfactory properties of 5,000 known odorants was used to train the system. The shape of each molecule serves as the input to an algorithm that predicts which words best describe its aroma. To check the model, the researchers also ran a blind validation in which a panel of trained participants described new molecules, and their answers were compared with the AI’s descriptions.

Fifteen participants were each given 400 odorants and instructed to use a set of 55 words—from minty to musty—to describe each molecule.
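As a rough sketch of the structure-to-descriptor task described above, and only a sketch (the published model is a graph neural network trained on the full dataset, while the toy molecules, labels, and fingerprint features below are assumptions for illustration), the problem can be framed as multi-label classification from molecular structure to odor words:

```python
# Illustrative sketch only: predict odor-descriptor words from molecular structure
# using Morgan fingerprints and a multi-label classifier. The published model is a
# graph neural network trained on ~5,000 labeled odorants; this toy stands in for it.
import numpy as np
from rdkit import Chem
from rdkit.Chem import AllChem
from sklearn.ensemble import RandomForestClassifier
from sklearn.multioutput import MultiOutputClassifier
from sklearn.preprocessing import MultiLabelBinarizer

# Hypothetical toy training data: SMILES strings paired with descriptor labels.
data = [
    ("CCOC(C)=O",         ["fruity", "sweet"]),   # ethyl acetate
    ("CC(=O)OCc1ccccc1",  ["floral", "sweet"]),   # benzyl acetate
    ("CCCCCC=O",          ["green", "fatty"]),    # hexanal
    ("CC(C)C1CCC(C)CC1O", ["minty"]),             # menthol
]

def featurize(smiles: str) -> np.ndarray:
    """Turn a molecule's structure into a fixed-length fingerprint vector."""
    mol = Chem.MolFromSmiles(smiles)
    return np.array(AllChem.GetMorganFingerprintAsBitVect(mol, 2, nBits=1024))

X = np.array([featurize(smiles) for smiles, _ in data])
mlb = MultiLabelBinarizer()
Y = mlb.fit_transform([labels for _, labels in data])

# One classifier per descriptor word (multi-label classification).
model = MultiOutputClassifier(RandomForestClassifier(n_estimators=100, random_state=0))
model.fit(X, Y)

# Predict descriptors for an unseen molecule (isoamyl acetate, banana-like).
probe = featurize("CC(C)CCOC(C)=O").reshape(1, -1)
predicted = mlb.inverse_transform(model.predict(probe))[0]
print("predicted descriptors:", list(predicted))
```

In the published work the inputs are molecular graphs rather than fingerprints, but the structure-in, odor-words-out framing is the same.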

Impressive results

In the end, the model’s descriptions outperformed those of the human panelists for 53 percent of the tested compounds.

The model even performed well on olfactory features for which it was not trained. “It was surprising that we never trained it to learn and describe the strength of smell, but it could still make accurate predictions,” Mainland says.

The model was also able to estimate a wide range of odor properties, including intensity, for 500,000 odorant molecules, and to find hundreds of pairs of structurally different compounds that nonetheless smell alike.

“We hope this map will be useful to researchers in chemistry, olfactory neuroscience, and psychophysics as a new tool to investigate the nature of the sense of smell,” says Mainland.

The researchers also think the map that emerges from the AI model is organized around metabolism, which would represent a significant change in how scientists think about smell.

In other words, odors that smell similar to one another are likely to share a metabolic pathway. Today, by contrast, scientists classify odor compounds the way chemists do, for example by asking whether a molecule contains an ester group or an aromatic ring.

According to the researchers, this study brings the world closer to digitizing odors so they can be recorded and reproduced. It could also help identify new scents for the fragrance industry, which would not only reduce dependence on endangered plants but could also yield new functional fragrances for uses such as repelling mosquitoes or masking bad smells.

The group next wants to work out how odors combine to produce a scent that the human brain perceives as distinct from any other.


Artificial intelligence is on the verge of explosion


Artificial intelligence is on the verge of explosion: Emad Mostaque, CEO of Stability AI and one of the pioneering leaders in AI development, claims the industry is at the beginning of its journey and about to explode.

During a virtual panel discussion held by Swiss investment bank UBS, Emad Mostaque, CEO of Stability AI, claimed that despite its growing popularity, artificial intelligence is still in its infancy.

He said: “[Compared to the development of smartphones] we are still at the point of introducing the iPhone 2G and 3G. I think next year will be the year [artificial intelligence] takes off.”

Read More: The world’s first dental robot started working

Muhammad Emad Mostaque was born in April 1983 to a Bengali Muslim family in Jordan. A month after his birth he was taken to Dhaka, Bangladesh, and at the age of seven he immigrated to Britain with his family.

In his twenties, he became interested in helping the Muslim world by creating online forums for Muslim communities and developing an “Islamic artificial intelligence” that would help guide people on their religious journey. Mostaque received his master’s degree in mathematics and computer science from the University of Oxford in 2005.

In 2020, he founded Stability AI, an image-generation company now valued at one billion dollars. In recent years he has been recognized as one of the most influential leaders in the artificial intelligence space.

Mostaque was also among the experts, along with Elon Musk and several other commentators, who signed the well-known open letter calling for a six-month pause on AI development, and he has since been very vocal about where the technology is heading.

Mostaque is not the only one who believes artificial intelligence is still in its early days. Michael Briest, head of European technology research at UBS, also noted that only about 6 percent of earnings statements this year mentioned AI, while roughly a quarter of companies in the software sector have taken advantage of it.

If Mostaque is right, that number will rise sharply: he estimates that 50 percent of all CEOs will be mentioning artificial intelligence by next year.

He points out that, so far, systems such as ChatGPT, Microsoft’s integration of AI into the Bing search engine, Google’s Bard chatbot, and Stability AI’s text-to-image generation have all been aimed at consumers.

According to Mostaque, when artificial intelligence moves past the consumer, we will see significant growth in its development. “It’s like the calm before the storm because the models aren’t quite ready yet,” he said.

When artificial intelligence finally takes hold at the enterprise level, it will spark intense competition among companies in sectors far beyond Silicon Valley, which in turn will push AI adoption further than anything we have experienced before.

“When your competitors start implementing it, you have to work with it because of the increased productivity and because of the competitive pressure,” said Mostaque, adding that training AI models does not take much time given the huge amount of data companies already hold.

He added: “You just have to have and use the right models in the right way to get results that increase productivity.” Emad Mostaque even went so far as to warn that those who ignore the AI revolution will be punished.

He said: “You will see that the market will punish those who do not use this [artificial intelligence].”

*** AI stands for Artificial Intelligence.
