
    1 Comment

    1. the_red_scimitar on

      It helps to understand how LLMs carry out graphics commands. Instead of natural language, the model's output is a series of commands to an API designed for the purpose, which generates an image.

      They have no idea what it will look like, and don't review it before presenting it. So the big thing here is not the "imagine" part; that just produces the image as it does now. Instead of showing it to you immediately, they'll first use another app, which determines the content of an image, and then check whether it's a reasonable version of what you asked for.
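      The generate-then-verify loop the commenter describes can be sketched in a few lines. This is only an illustrative sketch: `generate_image`, `describe_image`, and the keyword-overlap check below are hypothetical stand-ins for a real image-generation API, a real captioning model, and a real semantic comparison.

      ```python
      # Hedged sketch of "generate, then verify before presenting".
      # All functions here are hypothetical stand-ins, not any real API.

      def generate_image(prompt: str) -> dict:
          # Stand-in for the image-generation API: the LLM would emit
          # drawing commands here and receive pixels back.
          return {"prompt": prompt, "pixels": b"..."}

      def describe_image(image: dict) -> str:
          # Stand-in for a separate captioning model that inspects the
          # image content. (A real captioner would derive this from the
          # pixels; this stub just echoes the prompt for illustration.)
          return image["prompt"]

      def matches(request: str, caption: str) -> bool:
          # Crude keyword overlap; a real system would compare embeddings
          # or ask a vision-language model to judge the match.
          req = set(request.lower().split())
          cap = set(caption.lower().split())
          return len(req & cap) / max(len(req), 1) >= 0.5

      def imagine(request: str, max_tries: int = 3) -> dict:
          # Generate, caption, and compare; retry on a poor match.
          for _ in range(max_tries):
              image = generate_image(request)
              if matches(request, describe_image(image)):
                  return image
          return image  # give up and present the last attempt
      ```

      The point is the control flow, not the stubs: the image model never "sees" its own output, so a second model has to close the loop before anything is shown to the user.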
