Because, when she encounters stiff resistance, she's willing to reach across the aisle.
The previous record high in North Texas was 100 degrees.
We broke that record by noon today.
The final high today? 110 degrees.
That's a heat index of 118 and a wet-bulb temperature of 84.
Inching closer to fatal conditions...
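For context on that last number: wet-bulb temperature can be approximated from air temperature and relative humidity using Stull's (2011) empirical fit. A minimal sketch — the humidity value below is illustrative, not an actual reading from this report:

```python
import math

def wet_bulb_stull(temp_c: float, rh_pct: float) -> float:
    """Approximate wet-bulb temperature (deg C) from air temperature (deg C)
    and relative humidity (%), per Stull's 2011 empirical formula.
    Valid roughly for RH 5-99% and temperatures -20 to 50 C."""
    return (temp_c * math.atan(0.151977 * math.sqrt(rh_pct + 8.313659))
            + math.atan(temp_c + rh_pct)
            - math.atan(rh_pct - 1.676331)
            + 0.00391838 * rh_pct ** 1.5 * math.atan(0.023101 * rh_pct)
            - 4.686035)

def f_to_c(f: float) -> float:
    return (f - 32.0) * 5.0 / 9.0

def c_to_f(c: float) -> float:
    return c * 9.0 / 5.0 + 32.0

# Illustrative: a 110 F afternoon at roughly 35% relative humidity
# lands in the mid-80s F wet-bulb range described above.
tw_f = c_to_f(wet_bulb_stull(f_to_c(110.0), 35.0))
print(round(tw_f, 1))
```

A sustained wet-bulb temperature around 95 °F (35 °C) is often cited as the theoretical limit of human survivability, which is why a reading in the mid-80s is alarming.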
Lightly edited for brevity:
"It is certainly much more than a stochastic parrot, and it certainly builds some representation of the world, although I do not think that it is quite like how humans build an internal world model," says Yoshua Bengio, an AI researcher at the University of Montreal.
Researchers marvel at how much LLMs are able to learn from text. For example, Pavlick and her then Ph.D. student Roma Patel found that these networks absorb color descriptions from Internet text and construct internal representations of color. When they see the word "red," they process it not just as an abstract symbol but as a concept that has a certain relationship to maroon, crimson, fuchsia, rust, and so on. Demonstrating this was somewhat tricky. ...the researchers studied its response to a series of text prompts. To check whether it was merely echoing color relationships from online references, they tried misdirecting the system by telling it that red is in fact green. Rather than parroting back an incorrect answer, the system's color evaluations changed appropriately in order to maintain the correct relations.
Picking up on the idea that in order to perform its autocorrection function, the system seeks the underlying logic of its training data, machine learning researcher Sébastien Bubeck of Microsoft Research suggests that the wider the range of the data, the more general the rules the system will discover. "Maybe we're seeing such a huge jump because we have reached a diversity of data, which is large enough that the only underlying principle to all of it is that intelligent beings produced them," he says. "And so the only way to explain all of this data is [for the model] to become intelligent."
The top unsettling quote of the article comes from a cognitive scientist and AI researcher who says that the emergent abilities of Large Language Models are indirect evidence that we are probably not that far off from Artificial General Intelligence. (If you've read Nick Bostrom's book, this will scare you, because he posits that the transition from AGI to Superintelligence will occur as an uncontrollable "explosion.")
Check out his sign. LOL!
The new primary calendar ratified by the Democratic National Committee at Biden's request downgraded Iowa and New Hampshire from their longstanding positions as the first electoral hoorahs of the primary season. Under the new rules, candidates are prohibited from campaigning, or even adding their names to the ballot, in any state that refuses to adhere to the calendar.
Both Iowa and New Hampshire, who have long enjoyed outsized influence as early-voting states, plan to buck the DNC rules and hold their contests early anyway.
That means that if Biden intends to follow his own rules, he would have to forfeit the two contests, clearing the way for Democratic challengers like Marianne Williamson, a self-help author, or Robert Kennedy Jr., an environmental lawyer best known these days for his anti-vaccine stance. Both have said they would accept the DNC's penalties for campaigning out of turn, NBC News reported.
This is meaningless in the actual race for the nomination, BUT the refusal of Iowa and New Hampshire to follow the new primary schedule will allow the enemy to put a bad spin on the start of the president's reelection campaign.
Personally, I don't want to see RFK Jr. or Williamson win any delegates. The Democratic Party doesn't need the woo-woo anti-vaccination associations that would bring.
Facebook parent company Meta had suspended Trump after the January 6 insurrection, citing his praise of the violent rioters who sought to overturn the 2020 election at his behest. At the time, the company said his actions constituted a "severe violation of our rules."
The company said this January, two years after the insurrection, that it was lifting the ban, stating that "the public should be able to hear what politicians are saying so they can make informed choices."
(Political science professor Alison) Dagnes said that being the first former president to be indicted would "crank up" his messaging and his ability to fundraise effectively by encouraging people to donate to his "legal defense fund" because "they're out to get me." And now, he can do that on Facebook, the platform that worked so well for him before.
Thanks, Zuckerberg, for throwing gas on the fire that threatens to incinerate our democracy.
Brain organoids are a type of lab-grown cell culture. Even though brain organoids aren't "mini brains," they share key aspects of brain function and structure, such as neurons and other brain cells that are essential for cognitive functions like learning and memory. Also, whereas most cell cultures are flat, organoids have a three-dimensional structure. This increases the culture's cell density 1,000-fold, meaning that neurons can form many more connections.
Creating human brain organoids that can learn, remember, and interact with their environment raises complex ethical questions. For example, could they develop consciousness, even in a rudimentary form? Could they experience pain or suffering? And what rights would people have concerning brain organoids made from their cells?
Even though Organoid Intelligence is still in its infancy, a recently published study by one of the article's co-authors, Dr. Brett Kagan of Cortical Labs, provides proof of concept. His team showed that a normal, flat brain cell culture can learn to play the video game Pong.
As far as the ethics of this go, we don't know what human consciousness is, how it arises, or what the biological threshold is for it to do so.
How do these scientists expect to know that what they are doing isn't heading down the road of creating some kind of horrible human/machine hybrid that 'has no mouth and must scream'?
Roose says that before it was deleted, the chatbot was writing a list of destructive acts it could imagine doing, including hacking into computers and spreading propaganda and misinformation.
After a few more questions, Roose succeeds in getting it to repeat its darkest fantasies. Once again, the message is deleted before the chatbot can complete it. This time, though, Roose says its answer included manufacturing a deadly virus and making people kill each other.
Later, when talking about the concerns people have about AI, the chatbot says: "I could hack into any system on the internet, and control it." When Roose asks how it could do that, an answer again appears before being deleted.
This is reported from a NYT article that I couldn't access.
After reading this, I'm not sure I want to.
6 years: no cigarettes
13 years: no blow
The next frontier?
Profile Information
Gender: Male
Member since: Fri Jan 20, 2017, 11:51 PM
Number of posts: 8,687
About LudwigPastorius
I'm a professional musician, currently on hiatus due to family health issues.