OpenAI embeddings and powers-of-2 token counts
When I read about the best chunk size (in tokens) for embeddings with OpenAI models, the examples are almost always token counts like 512 or 1024 (for text-embedding-ada-002, for example: https://www.pinecone.io/learn/chunking-strategies/). I’m curious: why always powers of 2? Does it mean anything, or is it just arbitrary?
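For concreteness, here’s roughly what I mean by "chunking at 512 tokens" (a minimal sketch using tiktoken’s cl100k_base encoding, which is the tokenizer text-embedding-ada-002 uses; the chunk size of 512 is just the example value from the guides, not something OpenAI prescribes):

```python
import tiktoken

# cl100k_base is the encoding used by text-embedding-ada-002
enc = tiktoken.get_encoding("cl100k_base")

CHUNK_SIZE = 512  # the power-of-2 value the guides suggest -- why 512 and not, say, 500?

def chunk_text(text: str, chunk_size: int = CHUNK_SIZE) -> list[str]:
    """Split text into chunks of at most chunk_size tokens each."""
    tokens = enc.encode(text)
    return [
        enc.decode(tokens[i : i + chunk_size])
        for i in range(0, len(tokens), chunk_size)
    ]
```

Nothing about this code seems to care whether chunk_size is a power of 2, which is why I’m wondering where the convention comes from.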