
Nextgov: The AI Behind ChatGPT Looks to Visualize the World

In my previous NextGov column, I reviewed the new ChatGPT artificial intelligence, asking it to perform tasks ranging from programming in C++ to telling me a bedtime story. I even interviewed the AI about why some people are afraid of artificial intelligences and the importance of ethics as the science of AI moves forward.

I found that the ChatGPT AI from OpenAI was extremely adept at fielding just about any kind of question I could throw at it. Even though it’s not connected to the internet or any live data streams—so you can’t ask it about current events after 2021—it generally provided much better and more detailed information than you would ever find in something like a Google search. The AI is currently free to use, so everyone should give it a try. All you need to do is set up an account and get started…

It’s worth noting that the science of AI-based image generation is not without controversy. Most AIs like DALL-E have been trained by having the AI ingest some combination of images from classic masters, historical works of art, commercial artwork, photographs and other images on the internet. A graphical-based AI will then try to generate original art based on user requests and the information and artwork it’s learned about, but what happens if it draws too heavily from someone else’s work? Does that create a copyright issue? Also, once generated, who owns the created image? The AI, which is presumably owned by a company, generated the image, but the user crafted the description, sometimes in quite a bit of detail, that helped to make the finished product. As you can see, there are still quite a few legal and ethical issues to work out with AI-generated art.

However, let’s put that aside for now, and just have some fun with DALL-E. The thing to know about AI-generated images is that while the AI has potentially millions of points of data to draw from, it still needs humans to describe exactly what they want to see. If you ask for something simple, like a lemon tree, then you will probably get it, although it will likely look pretty generic. If instead you imagine a photograph of a wizened old farmer wearing faded denim picking a fat, overripe lemon in a verdant orchard at sunrise, then you’d better tell the AI all about your vision if you want it to generate anything close to that. The AI may be good, but it can’t yet read your mind…
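To see how much the level of detail in a prompt matters, here is a minimal Python sketch that assembles a descriptive image prompt from structured attributes. The `build_prompt` helper and its parameter names are illustrative inventions for this example, not part of any OpenAI API; the idea is simply that a richly specified description produces a far more constrained request than a bare subject.

```python
def build_prompt(subject, style=None, setting=None, lighting=None):
    """Assemble an image-generation prompt from optional attributes.

    A bare subject yields a generic prompt; adding style, setting and
    lighting narrows the request toward a specific vision.
    """
    parts = [subject]
    if setting:
        parts.append(f"in {setting}")
    if lighting:
        parts.append(f"at {lighting}")
    prompt = " ".join(parts)
    if style:
        prompt = f"A {style} of {prompt}"
    return prompt


# A generic request, like the plain lemon tree:
print(build_prompt("a lemon tree"))

# The richly specified version from the paragraph above:
print(build_prompt(
    "a wizened old farmer wearing faded denim picking a fat, overripe lemon",
    style="photograph",
    setting="a verdant orchard",
    lighting="sunrise",
))
```

The second call yields a prompt that spells out the subject, medium, setting and time of day, which is the kind of description a generator needs to come close to what you imagined.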

Read the full article here.

 

This topic has 0 replies, 1 voice, and was last updated 2 months, 2 weeks ago by Jenny Reed.

