AI Megathread

  • #41

    A few pieces here.

    Intel's Loihi roadmap calls for its brain chips to be as 'smart' as a mouse by 2019

    Intel said this week that a system based on its Loihi chip planned for 2019 will include the equivalent of 100 billion synapses, which is about the same brain complexity as a common mouse.

    Last September, Intel introduced the world to Loihi, a chip designed for what Intel calls probabilistic computing. Intel sees probabilistic computing as an important step on the road to artificial intelligence.

    Unlike a Core chip, which uses a sequential pipeline of instructions, Loihi is designed to mimic the way the brain works. The version of the Loihi chip that Intel introduced last year included 130,000 silicon “neurons” connected with 130 million “synapses,” the junctions that in humans connect the neurons within the brain.
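
    Intel hasn't published Loihi's internals in much detail, but the building block this class of neuromorphic chip mimics, a leaky integrate-and-fire neuron, fits in a few lines. A toy Python sketch (my own illustration, not Intel's design; the threshold and leak constants are made up):

    import numpy as np

    def simulate_lif(current, threshold=1.0, leak=0.9):
        """Leaky integrate-and-fire neuron: membrane potential v
        integrates input and decays each step; crossing the
        threshold emits a spike and resets v."""
        v, spikes = 0.0, []
        for t, i in enumerate(current):
            v = leak * v + i           # integrate input, with leak
            if v >= threshold:         # fire and reset
                spikes.append(t)
                v = 0.0
        return spikes

    rng = np.random.default_rng(0)
    print(simulate_lif(rng.uniform(0.0, 0.3, size=100)))  # spike times

    The contrast with a Core chip is that nothing here is a sequential instruction stream; state changes only as inputs (spikes) arrive.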
    And now... *whistles Twilight Zone theme*

    Google’s AI is learning to navigate like humans

    The company’s DeepMind artificial intelligence subsidiary has developed an AI that has learned how to navigate like a human being, the company announced in a blog post. Specifically, DeepMind’s AI has developed a system of spatial awareness that mimics the grid cells of humans and other mammals: specific cells in the brain that enable vector-based navigation, allowing us to calculate the direction and distance to a location even if we’ve never traveled that route before. What’s most impressive about the AI’s mimicking of mammalian grid cells is that the AI did it on its own; it wasn’t programmed to mimic them.
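
    The "vector-based navigation" part is concrete enough to sketch: an agent that path-integrates its own movements can compute a straight-line bearing and distance to a remembered goal, even along a route it has never taken. A toy illustration (mine, not DeepMind's code; grid cells are believed to underpin the position estimate):

    import math

    def path_integrate(start, moves):
        """Dead reckoning: accumulate movement steps into a running
        position estimate."""
        x, y = start
        for dx, dy in moves:
            x, y = x + dx, y + dy
        return (x, y)

    def homing_vector(position, goal):
        """Bearing (degrees) and distance straight to the goal,
        independent of the route traveled so far."""
        dx, dy = goal[0] - position[0], goal[1] - position[1]
        return math.degrees(math.atan2(dy, dx)), math.hypot(dx, dy)

    pos = path_integrate((0, 0), [(1, 0), (1, 1), (0, 2)])  # meandering route
    print(homing_vector(pos, (5, 5)))  # direct shortcut: bearing and distance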
    Rusakov's Signature
    "You drongos will have to do better than that if you want to beat the devil!"-Hugh Dawkins, alias: Tasmanian Devil


    • #42

      I also saw that Google was able to call out objects for people with visual impairments. I am particularly interested in this as it's something I've wanted for years. Maybe with the DeepMind navigation, it might eventually be able to help me get places. It's easy to get paranoid about all of this stuff, but we can't forget that it can genuinely help people too.
      Daryn's Signature
      “Just when you think humanity has found the limits of stupid, they go and ratchet up the standard by another notch.” - Bob


      • #43

        Artificial intelligence has learned to probe the minds of other computers

        Anyone who’s had a frustrating interaction with Siri or Alexa knows that digital assistants just don’t get humans. What they need is what psychologists call theory of mind, an awareness of others’ beliefs and desires. Now, computer scientists have created an artificial intelligence (AI) that can probe the “minds” of other computers and predict their actions, the first step to fluid collaboration among machines—and between machines and people.

        “Theory of mind is clearly a crucial ability” for navigating a world full of other minds, says Alison Gopnik, a developmental psychologist at the University of California, Berkeley, who was not involved in the work. By about the age of 4, human children understand that the beliefs of another person may diverge from reality, and that those beliefs can be used to predict the person’s future behavior. Some of today’s computers can label facial expressions such as “happy” or “angry” (a skill associated with theory of mind), but they have little understanding of human emotions or what motivates us.

        The new project began as an attempt to get humans to understand computers. Many algorithms used by AI aren’t fully written by programmers, but instead rely on the machine “learning” as it sequentially tackles problems. The resulting computer-generated solutions are often black boxes, with algorithms too complex for human insight to penetrate. So Neil Rabinowitz, a research scientist at DeepMind in London, and colleagues created a theory of mind AI called “ToMnet” and had it observe other AIs to see what it could learn about how they work.
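
        The real ToMnet is a neural network, but the loop it runs is easy to state: watch another agent act, fit a model of its behavior, and use that model to predict its next move. A deliberately crude stand-in (my sketch, not DeepMind's architecture), substituting action counts for a learned network:

        from collections import Counter, defaultdict

        class ObserverModel:
            """Models another agent purely from observed behavior and
            predicts its next action: a bare-bones caricature of what
            ToMnet learns with neural networks."""
            def __init__(self):
                self.seen = defaultdict(Counter)  # state -> action counts

            def observe(self, state, action):
                self.seen[state][action] += 1

            def predict(self, state):
                counts = self.seen[state]
                return counts.most_common(1)[0][0] if counts else None

        observer = ObserverModel()
        for state, action in [("door", "push"), ("door", "push"), ("wall", "turn")]:
            observer.observe(state, action)
        print(observer.predict("door"))  # -> "push"

        The interesting step beyond this, per the article, is modeling beliefs that diverge from reality, so the observer predicts what the other agent thinks will happen rather than what actually will.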
        Rusakov's Signature
        "You drongos will have to do better than that if you want to beat the devil!"-Hugh Dawkins, alias: Tasmanian Devil


        • #44

          That sound you hear is all the anti-AI pundits screaming. I think this is a really awesome development, but this sort of thing always freaks people out.
          Daryn's Signature
          “Just when you think humanity has found the limits of stupid, they go and ratchet up the standard by another notch.” - Bob


          • #45

            Taking machine thinking out of the black box | MIT News

            Software applications provide people with many kinds of automated decisions, such as assessing an individual's credit risk, telling a recruiter which job candidate to hire, or determining whether someone poses a threat to the public. In recent years, news headlines have warned of a future in which machines operate in the background of society, deciding the course of human lives while using untrustworthy logic.

            Part of this fear is derived from the obscure way in which many machine learning models operate. Known as black-box models, they are defined as systems in which the journey from input to output is next to impossible for even their developers to comprehend.

            "As machine learning becomes ubiquitous and is used for applications with more serious consequences, there's a need for people to understand how it's making predictions so they'll trust it when it's doing more than serving up an advertisement," says Jonathan Su, a member of the technical staff in MIT Lincoln Laboratory's Informatics and Decision Support Group.
            Rusakov's Signature
            "You drongos will have to do better than that if you want to beat the devil!"-Hugh Dawkins, alias: Tasmanian Devil


            • #46

              Google launches the AI Impact Challenge, a $25 million global contest for AI projects with positive social impact

              Some of the biggest hurdles in the field of artificial intelligence are preventing such software from developing the same intrinsic faults and biases as its human creators, and using AI to solve social issues instead of simply automating tasks. Now, Google, one of the world’s leading organizations developing AI software today, is launching a global competition to help spur the development of applications and research that have positive impacts on the field and society at large.

              The competition, called the AI Impact Challenge, was announced today at an event called AI for Social Good held at the company’s Sunnyvale, California office, and it’s being overseen and managed by the company’s Google.org charitable arm. Google is positioning it as a way to integrate nonprofits, universities, and other organizations not within the corporate and profit-driven world of Silicon Valley into the future-looking development of AI research and applications. The company says it will award up to $25 million to a number of grantees to “help transform the best ideas into action.” As part of the contest, Google will offer cloud resources for the projects, and it is opening applications starting today. Accepted grantees will be announced at next year’s Google I/O developer conference.
              Rusakov's Signature
              "You drongos will have to do better than that if you want to beat the devil!"-Hugh Dawkins, alias: Tasmanian Devil


              • #47

                Two for today.

                Deep learning algorithm detects Alzheimer’s up to six years before doctors

                A powerful new deep learning algorithm has been developed that can study PET scan images and effectively detect the onset of Alzheimer's disease up to six years earlier than current diagnostic methods. The research is part of a new wave of work using machine learning technology to identify subtle patterns in complex medical imaging data that human clinicians are unable to pick up.

                Reinforcement Learning with Prediction-Based Rewards

                We’ve developed Random Network Distillation (RND), a prediction-based method for encouraging reinforcement learning agents to explore their environments through curiosity, which for the first time exceeds average human performance on Montezuma’s Revenge. RND achieves state-of-the-art performance, periodically finds all 24 rooms and solves the first level without using demonstrations or having access to the underlying state of the game.
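
                The RND recipe itself is compact: freeze one randomly initialized network, train a second network to predict its outputs, and pay the agent the prediction error as an exploration bonus, which is largest on observations it has rarely seen. A stripped-down sketch of the bonus (mine, not OpenAI's training code; linear "networks" for brevity):

                import numpy as np

                rng = np.random.default_rng(0)
                D_OBS, D_FEAT = 8, 16
                W_target = rng.normal(size=(D_OBS, D_FEAT))  # frozen random target
                W_pred = np.zeros((D_OBS, D_FEAT))           # trained to match it

                def intrinsic_reward(obs):
                    """Novelty bonus: the predictor's error against the
                    frozen random target network."""
                    err = obs @ W_target - obs @ W_pred
                    return float(np.mean(err ** 2))

                def train_predictor(obs, lr=0.01):
                    """One step pulling the predictor toward the target;
                    revisiting similar states drives the bonus to zero."""
                    global W_pred
                    err = obs @ W_target - obs @ W_pred
                    W_pred += lr * np.outer(obs, err)

                obs = rng.normal(size=D_OBS)
                print(intrinsic_reward(obs))  # large: unfamiliar observation
                for _ in range(500):
                    train_predictor(obs)
                print(intrinsic_reward(obs))  # near zero: familiar now

                Rewarding exactly what the agent can't yet predict is what keeps it pushing into new rooms of Montezuma's Revenge instead of farming states it already knows.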
                Rusakov's Signature
                "You drongos will have to do better than that if you want to beat the devil!"-Hugh Dawkins, alias: Tasmanian Devil

