Part 4/14:
They are essentially points in a vector space, where similar concepts sit closer together. This lets the system recognize that, for a query like "sunset-inspired evening shoes", images that mirror a sunset gradient are more aligned with user intent than mere literal color matches. In other words, the system understands the feeling behind the query, not just the words.
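The idea of "closer together" can be made concrete with cosine similarity between vectors. The sketch below uses tiny, hand-made toy vectors purely for illustration; real embedding models produce vectors with hundreds or thousands of dimensions, and the specific numbers here are assumptions, not output from any actual model.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine of the angle between two vectors: 1.0 means same direction."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Toy 4-dimensional embeddings (illustrative values only).
query        = np.array([0.9, 0.8, 0.1, 0.2])    # "sunset-inspired evening shoes"
sunset_shoes = np.array([0.85, 0.75, 0.15, 0.25])  # image with a sunset gradient
orange_socks = np.array([0.2, 0.9, 0.8, 0.1])      # a literal color match only

# The semantically aligned item scores higher than the literal color match.
print(cosine_similarity(query, sunset_shoes))
print(cosine_similarity(query, orange_socks))
```

Ranking search results by this score is what surfaces the sunset-gradient shoes first, even when the literal words in the product description differ from the query.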
How are embeddings created?