
Nobel Prize in physics awarded to 2 scientists for discoveries that enabled machine learning

John Hopfield and Geoffrey Hinton, seen in the picture, are awarded this year’s Nobel Prize in Physics, announced at a press conference by Hans Ellergren, center, permanent secretary of the Royal Swedish Academy of Sciences, in Stockholm, Sweden, on Tuesday, Oct. 8, 2024. Credit: Christine Olsson/TT News Agency via AP

John Hopfield and Geoffrey Hinton—who is known as the Godfather of artificial intelligence—were awarded the Nobel Prize in physics Tuesday for discoveries and inventions that formed the building blocks of machine learning and artificial intelligence.

“This year’s two Nobel Laureates in physics have used tools from physics to develop methods that are the foundation of today’s powerful machine learning,” the Nobel committee said in a press release.

Hopfield carried out his research at Princeton University, and Hinton works at the University of Toronto.

Ellen Moons, a member of the Nobel committee at the Royal Swedish Academy of Sciences, said the two laureates “used fundamental concepts from statistical physics to design artificial neural networks that function as associative memories and find patterns in large data sets.”

She said that such networks have been used to advance research in physics and “have also become part of our daily lives, for instance in facial recognition and language translation.”

While the committee honored the science behind machine learning and artificial intelligence, Moons also mentioned its flipside, saying that “while machine learning has enormous benefits, its rapid development has also raised concerns about our future. Collectively, humans carry the responsibility for using this new technology in a safe and ethical way for the greatest benefit of humankind.”

Hinton shares those concerns. He quit a role at Google so he could more freely speak about the dangers of the technology he helped create.

Artificial intelligence pioneer Geoffrey Hinton speaks at the Collision Conference in Toronto, Wednesday, June 19, 2024. Credit: Chris Young/The Canadian Press via AP, File

On Tuesday, he said he was shocked at the honor.

“I’m flabbergasted. I had no idea this would happen,” he said when reached by the Nobel committee on the phone.

Hinton said he continues to worry “about a number of possible bad consequences” of his machine learning work, “particularly the threat of these things getting out of control,” but still would do it all over again.

Six days of Nobel announcements opened Monday with Americans Victor Ambros and Gary Ruvkun winning the medicine prize for their discovery of tiny bits of genetic material that serve as on and off switches inside cells that help control what the cells do and when they do it. If scientists can better understand how they work and how to manipulate them, it could one day lead to powerful treatments for diseases like cancer.

Computer scientist Geoffrey Hinton poses at Google’s Mountain View, Calif., headquarters on Wednesday, March 25, 2015. Credit: AP Photo/Noah Berger, File

The physics prize carries a cash award of 11 million Swedish kronor ($1 million) from a bequest left by the award’s creator, Swedish inventor Alfred Nobel. The laureates are invited to receive their awards at ceremonies on Dec. 10, the anniversary of Nobel’s death.

Nobel announcements continue with the chemistry prize on Wednesday and literature on Thursday. The Nobel Peace Prize will be announced Friday and the economics award on Oct. 14.







Nobel committee announcement:

The Royal Swedish Academy of Sciences has decided to award the Nobel Prize in Physics 2024 to

John J. Hopfield, Princeton University, NJ, U.S.

Geoffrey E. Hinton, University of Toronto, Canada

“for foundational discoveries and inventions that enable machine learning with artificial neural networks”

They trained artificial neural networks using physics

This year’s two Nobel Laureates in Physics have used tools from physics to develop methods that are the foundation of today’s powerful machine learning. John Hopfield created an associative memory that can store and reconstruct images and other types of patterns in data. Geoffrey Hinton invented a method that can autonomously find properties in data, and so perform tasks such as identifying specific elements in pictures.

When we talk about artificial intelligence, we often mean machine learning using artificial neural networks. This technology was originally inspired by the structure of the brain. In an artificial neural network, the brain’s neurons are represented by nodes that have different values. These nodes influence each other through connections that can be likened to synapses and which can be made stronger or weaker. The network is trained, for example by developing stronger connections between nodes with simultaneously high values. This year’s laureates have conducted important work with artificial neural networks from the 1980s onward.

John Hopfield invented a network that uses a method for saving and recreating patterns. We can imagine the nodes as pixels. The Hopfield network utilises physics that describes a material’s characteristics due to its atomic spin—a property that makes each atom a tiny magnet. The network as a whole is described in a manner equivalent to the energy in the spin system found in physics, and is trained by finding values for the connections between the nodes so that the saved images have low energy. When the Hopfield network is fed a distorted or incomplete image, it methodically works through the nodes and updates their values so the network’s energy falls. The network thus works stepwise to find the saved image that is most like the imperfect one it was fed with.

Geoffrey Hinton used the Hopfield network as the foundation for a new network that uses a different method: the Boltzmann machine. This can learn to recognise characteristic elements in a given type of data. Hinton used tools from statistical physics, the science of systems built from many similar components. The machine is trained by feeding it examples that are very likely to arise when the machine is run. The Boltzmann machine can be used to classify images or create new examples of the type of pattern on which it was trained. Hinton has built upon this work, helping initiate the current explosive development of machine learning.

“The laureates’ work has already been of the greatest benefit. In physics we use artificial neural networks in a vast range of areas, such as developing new materials with specific properties,” says Ellen Moons, Chair of the Nobel Committee for Physics.

The Nobel Prize in Physics 2024

This year’s laureates used tools from physics to construct methods that helped lay the foundation for today’s powerful machine learning. John Hopfield created a structure that can store and reconstruct information. Geoffrey Hinton invented a method that can independently discover properties in data and which has become important for the large artificial neural networks now in use.

They used physics to find patterns in information

Many people have experienced how computers can translate between languages, interpret images and even conduct reasonable conversations. What is perhaps less well known is that this type of technology has long been important for research, including the sorting and analysis of vast amounts of data. The development of machine learning has exploded over the past fifteen to twenty years and utilises a structure called an artificial neural network. Nowadays, when we talk about artificial intelligence, this is often the type of technology we mean.

Although computers cannot think, machines can now mimic functions such as memory and learning. This year’s laureates in physics have helped make this possible. Using fundamental concepts and methods from physics, they have developed technologies that use structures in networks to process information.

Machine learning differs from traditional software, which works like a type of recipe. The software receives data, which is processed according to a clear description and produces the results, much like when someone collects ingredients and processes them by following a recipe, producing a cake. Instead, in machine learning the computer learns by example, enabling it to tackle problems that are too vague and complicated to be managed by step-by-step instructions. One example is interpreting a picture to identify the objects in it.
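
To make the contrast concrete, here is a minimal sketch in Python (an illustration with made-up values, not something from the original article): the first function follows an explicit, recipe-like rule, while the second recovers the same rule purely from examples.

```python
import numpy as np

# Traditional software: an explicit recipe written down by a programmer.
def fahrenheit_from_celsius(c):
    return c * 9.0 / 5.0 + 32.0

# Machine learning: the same relationship recovered from examples alone.
celsius = np.array([0.0, 10.0, 20.0, 30.0, 40.0])
fahrenheit = np.array([32.0, 50.0, 68.0, 86.0, 104.0])

# Fit a line f = a*c + b by least squares; the "rule" is learned, not written.
a, b = np.polyfit(celsius, fahrenheit, deg=1)
print(a, b)  # roughly 1.8 and 32.0 -- the recipe, rediscovered from data
```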

Mimics the brain

An artificial neural network processes information using the entire network structure. The inspiration initially came from the desire to understand how the brain works. In the 1940s, researchers had started to reason around the mathematics that underlies the brain’s network of neurons and synapses. Another piece of the puzzle came from psychology, thanks to neuroscientist Donald Hebb’s hypothesis about how learning occurs because connections between neurons are reinforced when they work together.

Later, these ideas were followed by attempts to recreate how the brain’s network functions by building artificial neural networks as computer simulations. In these, the brain’s neurons are mimicked by nodes that are given different values, and the synapses are represented by connections between the nodes that can be made stronger or weaker. Donald Hebb’s hypothesis is still used as one of the basic rules for updating artificial networks through a process called training.
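
A minimal sketch of such an update rule in Python (a simplified illustration; the binary node values and the learning rate are assumptions for the example, not details from the text):

```python
import numpy as np

rng = np.random.default_rng(0)
values = rng.choice([0.0, 1.0], size=5)   # five nodes, each active (1) or inactive (0)
weights = np.zeros((5, 5))                # connection strengths, initially zero

learning_rate = 0.1
# Hebb's rule: strengthen the connection between every pair of nodes
# that are active at the same time ("fire together, wire together").
weights += learning_rate * np.outer(values, values)
np.fill_diagonal(weights, 0.0)            # a node has no connection to itself
```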

At the end of the 1960s, some discouraging theoretical results caused many researchers to suspect that these neural networks would never be of any real use. However, interest in artificial neural networks was reawakened in the 1980s, when several important ideas made an impact, including work by this year’s laureates.

Associative memory

Imagine that you are trying to remember a fairly unusual word that you rarely use, such as one for that sloping floor often found in cinemas and lecture halls. You search your memory. It’s something like ramp… perhaps rad…ial? No, not that. Rake, that’s it!

This process of searching through similar words to find the right one is reminiscent of the associative memory that the physicist John Hopfield discovered in 1982. The Hopfield network can store patterns and has a method for recreating them. When the network is given an incomplete or slightly distorted pattern, the method can find the stored pattern that is most similar.

Hopfield had previously used his background in physics to explore theoretical problems in molecular biology. When he was invited to a meeting about neuroscience he encountered research into the structure of the brain. He was fascinated by what he learned and started to think about the dynamics of simple neural networks. When neurons act together, they can give rise to new and powerful characteristics that are not apparent to someone who only looks at the network’s separate components.

In 1980, Hopfield left his position at Princeton University, where his research interests had taken him outside the areas in which his colleagues in physics worked, and moved across the continent. He had accepted the offer of a professorship in chemistry and biology at Caltech (California Institute of Technology) in Pasadena, southern California. There, he had access to computer resources that he could use for free experimentation and to develop his ideas about neural networks.

However, he did not abandon his foundation in physics, where he found inspiration for his understanding of how systems with many small components that work together can give rise to new and interesting phenomena. He particularly benefitted from having learned about magnetic materials that have special characteristics thanks to their atomic spin—a property that makes each atom a tiny magnet. The spins of neighbouring atoms affect each other; this can allow domains to form with spin in the same direction. He was able to make a model network with nodes and connections by using the physics that describes how materials develop when spins influence each other.

The network saves images in a landscape

The network that Hopfield built has nodes that are all joined together via connections of different strengths. Each node can store an individual value—in Hopfield’s first work this could either be 0 or 1, like the pixels in a black and white picture.

Hopfield described the overall state of the network with a property that is equivalent to the energy in the spin system found in physics; the energy is calculated using a formula that uses all the values of the nodes and all the strengths of the connections between them. The Hopfield network is programmed by an image being fed to the nodes, which are given the value of black (0) or white (1). The network’s connections are then adjusted using the energy formula, so that the saved image gets low energy. When another pattern is fed into the network, there is a rule for going through the nodes one by one and checking whether the network has lower energy if the value of that node is changed. If it turns out that energy is reduced if a black pixel is white instead, it changes colour. This procedure continues until it is impossible to find any further improvements. When this point is reached, the network has often reproduced the original image on which it was trained.
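
The whole procedure fits in a few lines of code. The following is a simplified Hopfield-style sketch in Python, not Hopfield’s original program; it uses the common +1/-1 encoding for white and black pixels instead of the 1/0 used above, which keeps the energy formula compact.

```python
import numpy as np

def train(patterns):
    """Set the connection strengths so the stored patterns get low energy
    (Hebbian storage). Each pattern is a vector of +1/-1 pixel values."""
    n = patterns.shape[1]
    w = sum(np.outer(p, p) for p in patterns) / n
    np.fill_diagonal(w, 0.0)                    # no self-connections
    return w

def energy(w, s):
    # The standard Hopfield energy: E = -1/2 * sum over i,j of w[i,j]*s[i]*s[j]
    return -0.5 * s @ w @ s

def recall(w, s, sweeps=5):
    """Go through the nodes one by one, changing a value whenever doing so
    lowers the network's energy, until no further improvement is found."""
    s = s.copy()
    for _ in range(sweeps):
        for i in range(len(s)):
            s[i] = 1.0 if w[i] @ s >= 0 else -1.0   # local update never raises E
    return s

patterns = np.array([[1, -1, 1, -1, 1, -1],
                     [1, 1, 1, -1, -1, -1]], dtype=float)
w = train(patterns)
noisy = np.array([1, -1, 1, -1, 1, 1], dtype=float)    # one pixel corrupted
print(energy(w, noisy), energy(w, recall(w, noisy)))   # the energy falls...
print(recall(w, noisy))                                # ...onto the first stored pattern
```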

This may not appear so remarkable if you only save one pattern. Perhaps you are wondering why you don’t just save the image itself and compare it to another image being tested, but Hopfield’s method is special because several pictures can be saved at the same time and the network can usually differentiate between them.

Hopfield likened searching the network for a saved state to rolling a ball through a landscape of peaks and valleys, with friction that slows its movement. If the ball is dropped in a particular location, it will roll into the nearest valley and stop there. If the network is given a pattern that is close to one of the saved patterns it will, in the same way, keep moving forward until it ends up at the bottom of a valley in the energy landscape, thus finding the closest pattern in its memory.

The Hopfield network can be used to recreate data that contains noise or which has been partially erased.

Hopfield and others have continued to develop the details of how the Hopfield network functions, including nodes that can store any value, not just zero or one. If you think about nodes as pixels in a picture, they can have different colours, not just black or white. Improved methods have made it possible to save more pictures and to differentiate between them even when they are quite similar. It is just as possible to identify or reconstruct any information at all, provided it is built from many data points.

Classification using nineteenth-century physics

Remembering an image is one thing, but interpreting what it depicts requires a little more.

Even very young children can point at different animals and confidently say whether it is a dog, a cat, or a squirrel. They might get it wrong occasionally, but fairly soon they are correct almost all the time. A child can learn this even without seeing any diagrams or explanations of concepts such as species or mammal. After encountering a few examples of each type of animal, the different categories fall into place in the child’s head. People learn to recognise a cat, or understand a word, or enter a room and notice that something has changed, by experiencing the environment around them.

When Hopfield published his article on associative memory, Geoffrey Hinton was working at Carnegie Mellon University in Pittsburgh, U.S. He had previously studied experimental psychology and artificial intelligence in England and Scotland and was wondering whether machines could learn to process patterns in a similar way to humans, finding their own categories for sorting and interpreting information. Along with his colleague, Terrence Sejnowski, Hinton started from the Hopfield network and expanded it to build something new, using ideas from statistical physics.

Statistical physics describes systems that are composed of many similar elements, such as molecules in a gas. It is difficult, or impossible, to track all the separate molecules in the gas, but it is possible to consider them collectively to determine the gas’s overarching properties, like pressure or temperature. There are many potential ways for gas molecules to spread through its volume at individual speeds and still result in the same collective properties.

The states in which the individual components can jointly exist can be analysed using statistical physics, and the probability of them occurring calculated. Some states are more probable than others; this depends on the amount of available energy, which is described in an equation by the nineteenth-century physicist Ludwig Boltzmann. Hinton’s network utilised that equation, and the method was published in 1985 under the striking name of the Boltzmann machine.
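
In code, Boltzmann’s equation is essentially a one-liner. A tiny numerical sketch in Python (the three energy values and the temperature are made-up numbers, purely for illustration):

```python
import numpy as np

energies = np.array([0.0, 1.0, 2.0])  # hypothetical energies of three states
T = 1.0                               # temperature, in units where k_B = 1

# Boltzmann's distribution: P(state) is proportional to exp(-E / T),
# so low-energy states are exponentially more probable.
weights = np.exp(-energies / T)
probabilities = weights / weights.sum()
print(probabilities)                  # about [0.665, 0.245, 0.090]
```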

Recognising new examples of the same type

The Boltzmann machine is commonly used with two different types of nodes. Information is fed to one group, which are called visible nodes. The other nodes form a hidden layer. The hidden nodes’ values and connections also contribute to the energy of the network as a whole.

The machine is run by applying a rule for updating the values of the nodes one at a time. Eventually the machine will enter a state in which the nodes’ pattern can change, but the properties of the network as a whole remain the same. Each possible pattern will then have a specific probability that is determined by the network’s energy according to Boltzmann’s equation. When the machine stops it has created a new pattern, which makes the Boltzmann machine an early example of a generative model.

The Boltzmann machine can learn—not from instructions, but from being given examples. It is trained by updating the values in the network’s connections so that the example patterns, which were fed to the visible nodes when it was trained, have the highest possible probability of occurring when the machine is run. If the same pattern is repeated several times during this training, the probability for this pattern is even higher. Training also affects the probability of outputting new patterns that resemble the examples on which the machine was trained.

A trained Boltzmann machine can recognise familiar traits in information it has not previously seen. Imagine meeting a friend’s sibling, and you can immediately see that they must be related. In a similar way, the Boltzmann machine can recognise an entirely new example if it belongs to a category found in the training material, and differentiate it from material that is dissimilar.

In its original form, the Boltzmann machine is fairly inefficient and takes a long time to find solutions. Things become more interesting when it is developed in various ways, which Hinton has continued to explore. Later versions have been thinned out, as the connections between some of the units have been removed. It turns out that this may make the machine more efficient.
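
The thinned-out version referred to here is widely known as the restricted Boltzmann machine, in which connections run only between the visible and hidden nodes, not within each group. Below is a compact sketch in Python of one weight update using contrastive divergence, the fast approximate training rule Hinton later introduced for such machines; the layer sizes, learning rate and example data are illustrative assumptions, and bias terms are left out for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cd1_step(v0, w, lr=0.1):
    """One contrastive-divergence (CD-1) weight update for a binary example v0."""
    h0_prob = sigmoid(v0 @ w)                          # infer the hidden nodes from the data
    h0 = (rng.random(h0_prob.shape) < h0_prob) * 1.0   # sample their binary states
    v1 = sigmoid(w @ h0)                               # reconstruct the visible nodes
    h1_prob = sigmoid(v1 @ w)                          # re-infer the hidden nodes
    # Nudge the weights so the data becomes more probable than its reconstruction.
    return w + lr * (np.outer(v0, h0_prob) - np.outer(v1, h1_prob))

# Repeat one example so its pattern becomes highly probable when the machine runs.
w = 0.01 * rng.standard_normal((6, 3))   # 6 visible nodes, 3 hidden nodes
example = np.array([1.0, 1.0, 1.0, 0.0, 0.0, 0.0])
for _ in range(100):
    w = cd1_step(example, w)
```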

During the 1990s, many researchers lost interest in artificial neural networks, but Hinton was one of those who continued to work in the field. He also helped start the new explosion of exciting results; in 2006 he and his colleagues Simon Osindero, Yee Whye Teh and Ruslan Salakhutdinov developed a method for pretraining a network with a series of Boltzmann machines in layers, one on top of the other. This pretraining gave the connections in the network a better starting point, which optimised its training to recognise elements in pictures.

The Boltzmann machine is often used as part of a larger network. For example, it can be used to recommend films or television series based on the viewer’s preferences.

Machine learning—today and tomorrow

Thanks to their work from the 1980s and onward, John Hopfield and Geoffrey Hinton have helped lay the foundation for the machine learning revolution that started around 2010.

The development we are now witnessing has been made possible through access to the vast amounts of data that can be used to train networks, and through the enormous increase in computing power. Today’s artificial neural networks are often enormous and constructed from many layers. These are called deep neural networks and the way they are trained is called deep learning.

A quick glance at Hopfield’s article on associative memory, from 1982, provides some perspective on this development. In it, he used a network with 30 nodes. If all the nodes are connected to each other, there are 435 connections. The nodes have their values, the connections have different strengths and, in total, there are fewer than 500 parameters to keep track of. He also tried a network with 100 nodes, but this was too complicated, given the computer he was using at the time. We can compare this to the large language models of today, which are built as networks that can contain more than one trillion parameters (one million millions).
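
The connection counts quoted here follow from simple pair-counting: a fully connected network of n nodes has n(n-1)/2 connections. A quick check in Python:

```python
def pairwise_connections(n):
    # Each of the n nodes connects to the other n-1; divide by two so
    # that every connection is only counted once.
    return n * (n - 1) // 2

print(pairwise_connections(30))    # 435, as in Hopfield's 1982 network
print(pairwise_connections(100))   # 4950 -- already a tenfold jump in bookkeeping
```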

Many researchers are now developing machine learning’s areas of application. Which will be the most viable remains to be seen, while there is also wide-ranging discussion on the ethical issues that surround the development and use of this technology.

Because physics has contributed tools for the development of machine learning, it is interesting to see how physics, as a research field, is also benefitting from artificial neural networks. Machine learning has long been used in areas we may be familiar with from previous Nobel Prizes in Physics. These include the use of machine learning to sift through and process the vast amounts of data necessary to discover the Higgs particle. Other applications include reducing noise in measurements of the gravitational waves from colliding black holes, or the search for exoplanets.

In recent years, this technology has also begun to be used when calculating and predicting the properties of molecules and materials—such as calculating protein molecules’ structure, which determines their function, or working out which new versions of a material may have the best properties for use in more efficient solar cells.

More information:
www.nobelprize.org/prizes/phys … dvanced-information/

© 2024 The Associated Press. All rights reserved. This material may not be published, broadcast, rewritten or redistributed without permission.

Citation:
Nobel Prize in physics awarded to 2 scientists for discoveries that enabled machine learning (2024, October 8)
retrieved 8 October 2024
from https://phys.org/news/2024-10-nobel-prize-physics-awarded-discoveries.html


Federal judge orders Google to open its Android app store to competition

A sign is shown on a Google building at their campus in Mountain View, Calif., on Sept. 24, 2019. Credit: AP Photo/Jeff Chiu, File

A federal judge on Monday ordered Google to tear down the digital walls shielding its Android app store from competition as punishment for maintaining an illegal monopoly that helped expand the company’s internet empire.

The injunction issued by U.S. District Judge James Donato will require Google to make several changes that the Mountain View, California, company had been resisting. Those include a provision that will require its Play Store for Android apps to distribute rival third-party app stores so consumers can download them to their phones, if they so desire.

The judge’s order will also make the millions of Android apps in the Play Store library accessible to rivals, allowing them to offer up a competitive selection.

Donato is giving Google until November to make the revisions dictated in his order. The company had insisted it would take 12 to 16 months to design the safeguards needed to reduce the chances of potentially malicious software making its way into rival Android app stores and infecting millions of Samsung phones and other mobile devices running on its free Android software.

The court-mandated overhaul is meant to prevent Google from walling off competition in the Android app market as part of an effort to protect a commission system that has been a boon for one of the world’s most prosperous companies and helped elevate the market value of its corporate parent Alphabet Inc. to $2 trillion.

Google said in a blog post that it will ask the court to pause the pending changes, and will appeal the court’s decision.

Donato also ruled that, for a period of three years ending Nov. 1, 2027, Google won’t be able to share revenue from its Play Store with anyone who distributes Android apps or is considering launching an Android app distribution platform or store. It also won’t be allowed to pay developers, or share revenue with them, so that they will launch an app in the Google Play Store first or exclusively, and can’t make deals with manufacturers to preinstall the Google Play store on any specific location on an Android device. Nor will it be able to require apps to use its billing system, or to stop developers from telling customers they can download their apps elsewhere, potentially for cheaper.

The Play Store has been earning billions of dollars annually for years, primarily through 15% to 30% commissions that Google has been imposing on digital transactions completed within Android apps. It’s a similar fee structure to the one that Apple deploys in its iPhone app store—a structure that prompted video game maker Epic Games to file antitrust lawsuits four years ago in an effort to foster competition that could help drive down prices for both app makers and consumers.

A federal judge mostly sided with Apple in a September 2021 decision that was upheld by an appeals court. Still, a jury favored Epic Games after a four-week trial last year and delivered a verdict that tarred the Play Store as an illegal monopoly.

That prompted another round of hearings this year to help Donato determine what steps should be taken to restore fair competition. Google argued that Epic Games was seeking some extreme changes, saddling the company with costs that could run as high as $600 billion. Epic contended Google could level the playing field for as little as $1 million. It’s unclear how much the changes ordered by Donato will cost Google.

Although Epic lost its antitrust case against Apple, Donato’s ruling could still have ripple effects on the iPhone app store as another federal judge weighs whether Apple is making it easy enough to promote different ways that consumers can pay for digital transactions. Apple was ordered to allow in-app links to alternative payment systems as part of U.S. District Judge Yvonne Gonzalez Rogers’ decision in that case, but Epic contends the provision is being undermined with the creation of another commission system that stifles consumer choice.

The forthcoming Play Store shakeup could be just the first unwelcome shock that antitrust law delivers to Google. In the biggest antitrust case brought by the U.S. Justice Department in a quarter century, U.S. District Judge Amit Mehta in August declared Google’s dominant search engine to be an illegal monopoly, too, and is now getting ready to start hearings on how to punish Google for that bad behavior. Google is appealing Mehta’s ruling in the search engine case in hopes of warding off a penalty that could hurt its business even more than the changes being ordered in the Play Store.

“Provided the ruling survives the appeals process, Google will almost certainly take a revenue hit,” said Emarketer analyst Evelyn Mitchell-Wolf. “No doubt some of the largest app developers like Epic Games will start encroaching on Google Play Store’s market share, meaning Google will lose out on its usual cut of subscription and in-app purchases.”

The analyst added that, while the Google Play Store will likely continue to benefit from brand recognition since it was the default Android app store for so long, “some consumers may defect if they can get better deals on their favorite apps elsewhere.” And app developers will likely take advantage of the opportunity to let consumers know about direct downloads.

“So Google may see fewer Play Store revenues even among the Android users that stick to the default,” Mitchell-Wolf said.

Alphabet’s shares fell $4.08, or 2.4%, to close Monday at $162.98.

© 2024 The Associated Press. All rights reserved. This material may not be published, broadcast, rewritten or redistributed without permission.

Citation:
Federal judge orders Google to open its Android app store to competition (2024, October 8)
retrieved 8 October 2024
from https://techxplore.com/news/2024-10-federal-google-android-app-competition.html


Taiwan’s Foxconn says building world’s largest ‘superchip’ plant

Taiwanese tech giant Foxconn says it is building the world’s largest production plant for Nvidia’s GB200 ‘superchips’

Taiwanese tech giant Foxconn said on Tuesday it is building the world’s largest production plant for US hardware leader Nvidia’s GB200 “superchips” that power artificial intelligence servers.

Foxconn, also known by its official name Hon Hai Precision Industry, is the world’s biggest contract electronics manufacturer and assembles devices for major tech companies, including Apple.

Ambitious to expand beyond electronics assembly, it has been pushing into areas ranging from electric vehicles to semiconductors and servers.

“We’re building the largest GB200 production facility on the planet,” senior executive Benjamin Ting said at the company’s annual “Hon Hai Tech Day”.

“I don’t think I can say where now, but it’s the largest on the planet,” said Ting, Foxconn’s senior vice president for the cloud enterprise solutions business.

Chairman Young Liu said while opening the two-day event that Foxconn would be “the first to ship these superchips”.

Liu later told reporters the new plant was in Mexico.

Unlike its rivals Intel, Micron and Texas Instruments, Nvidia does not manufacture its own chips but uses subcontractors.

Foxconn also unveiled new electric vehicle prototypes at the tech day—a seven-seater lifestyle multipurpose utility vehicle and a 21-seater bus.

It plans to do with electric vehicles what it did for gadgets—become a go-to contract builder.

Foxconn announced last year that it would team up with Nvidia to create “AI factories”—powerful data-processing centers that would drive the production of next-generation products.

© 2024 AFP

Citation:
Taiwan’s Foxconn says building world’s largest ‘superchip’ plant (2024, October 8)
retrieved 8 October 2024
from https://techxplore.com/news/2024-10-taiwan-foxconn-world-largest-superchip.html


The Nobel Prize in physics is being awarded, a day after 2 Americans won the medicine prize

A bust of Alfred Nobel on display following a press conference at the Karolinska Institute in Stockholm, Sweden, on Monday, Oct. 3, 2022. Credit: Henrik Montgomery/TT News Agency via AP, File

The Nobel Prize in physics is being awarded Tuesday, a day after two American scientists won the medicine prize for their discovery of microRNA.

Three scientists won last year’s physics Nobel for providing the first split-second glimpse into the superfast world of spinning electrons, a field that could one day lead to better electronics or disease diagnoses.

The 2023 award went to French-Swedish physicist Anne L’Huillier, French scientist Pierre Agostini and Hungarian-born Ferenc Krausz for their work with the tiny part of each atom that races around the center and is fundamental to virtually everything: chemistry, physics, our bodies and our gadgets.

Six days of Nobel announcements opened Monday with Americans Victor Ambros and Gary Ruvkun winning the medicine prize for their discovery of tiny bits of genetic material that serve as on and off switches inside cells that help control what the cells do and when they do it.

If scientists can better understand how they work and how to manipulate them, it could one day lead to powerful treatments for diseases like cancer.

The physics prize carries a cash award of 11 million Swedish kronor ($1 million) from a bequest left by the award’s creator, Swedish inventor Alfred Nobel. It has been awarded 117 times. The laureates are invited to receive their awards at ceremonies on Dec. 10, the anniversary of Nobel’s death.

Nobel announcements continue with the chemistry prize on Wednesday and literature on Thursday. The Nobel Peace Prize will be announced Friday and the economics award on Oct. 14.

© 2024 The Associated Press. All rights reserved. This material may not be published, broadcast, rewritten or redistributed without permission.

Citation:
The Nobel Prize in physics is being awarded, a day after 2 Americans won the medicine prize (2024, October 8)
retrieved 8 October 2024
from https://phys.org/news/2024-10-nobel-prize-physics-awarded-day.html


Samsung third-quarter forecast misses expectations

Credit: Unsplash/CC0 Public Domain

Samsung Electronics said Tuesday it expected third-quarter profits to jump almost three-fold, but the forecast fell short of market expectations as the firm struggled to leverage robust demand for chips used in artificial intelligence servers.

The firm is the flagship subsidiary of South Korean giant Samsung Group, by far the largest of the family-controlled conglomerates that dominate business in Asia’s fourth-largest economy.

The tech giant said in a regulatory filing that its July-September operating profits were expected to rise to 9.1 trillion won ($6.8 billion), up 274.5 percent from a year earlier.

But that was almost 12 percent lower than the average estimate, according to South Korea’s Yonhap News Agency, which cited its own financial data firm.

The figure is also down nearly 13 percent from the firm’s operating profit of 10.44 trillion won in the previous quarter.

Sales, meanwhile, were seen increasing 17.2 percent on-year to 79 trillion won.

Samsung’s management released a rare, separate statement on Tuesday to its customers, investors and employees, offering its apologies.

“Due to results that fell short of market expectations, concerns have arisen about our fundamental technological competitiveness and the future of the company,” said the statement, which was signed by Jun Young-hyun, the vice chairman of the company’s device solutions division.

“Many people are talking about Samsung’s crisis… We will make sure that the serious situation we are currently facing becomes an opportunity for a fresh start.”

Shares in Samsung fell 1.5 percent in Seoul on Tuesday.

Jene Park, a senior analyst at Counterpoint Research, said that there had been “an expected decline” in Samsung’s memory sector.

“The downturn is attributed to delays in the supply of fifth-generation HBM (HBM3E) and a general reduction in memory demand,” Park told AFP.

“In the smartphone business, sales of new foldable devices seem to be below expectations, as competition among foldable suppliers is becoming more intense.”

Joanne Chiao, an analyst at Taipei-based research group TrendForce, told AFP that in the foundry sector, “[as] the momentum for component stockpiling enters the off-season, capacity utilization rates at various foundries are generally flat or slightly declining”.

Samsung is expected to release its final earnings report at the end of this month.

The firm said last week it was planning to cut jobs in some of its Asian operations, describing the move as “routine workforce adjustments”.

Bloomberg reported that the layoffs could affect about 10 percent of the workforce in those markets.

© 2024 AFP

Citation:
Samsung third-quarter forecast misses expectations (2024, October 8)
retrieved 8 October 2024
from https://techxplore.com/news/2024-10-samsung-quarter.html
