Something in between:
You don't need to duplicate the data; instead, you load it "in cache" into Data Explorer. Microsoft's official statement: Let's imagine you have applications running on AWS, periodically storing logs in S3, or S3 being used as a staging layer, and you want to load that data into Azure Data Explorer for ad-hoc analysis and reporting.
Prior to S3 ingestion support in ADX, depending on the volume and frequency of the incoming data, you might use an ETL process to move the data from S3 to Azure Blob Storage before ingesting it into ADX, or read the file content in an AWS Lambda or Azure Function and ingest it directly into ADX. The former approach requires you to duplicate the data, adding cost and complexity, and the latter proves challenging, especially when moving large files.
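With native S3 ingestion, ADX can pull a file straight from an S3 URL via an ingest control command, with AWS credentials appended to the URL. As a minimal sketch, the helper below builds such a command string; the table name, bucket URL, and credential placeholders are all hypothetical, and in practice you would send the command to your cluster with a Kusto client (for example, `KustoClient.execute` from the `azure-kusto-data` package):

```python
def build_s3_ingest_command(table: str, s3_url: str,
                            aws_key_id: str, aws_secret: str,
                            data_format: str = "csv") -> str:
    """Build an ADX control command that ingests a file directly from S3.

    ADX accepts an S3 object URL with AwsCredentials appended after a
    semicolon, so no intermediate copy to Azure Blob Storage is needed.
    """
    return (
        f".ingest into table {table} "
        f"('{s3_url};AwsCredentials={aws_key_id},{aws_secret}') "
        f"with (format='{data_format}')"
    )

# Hypothetical values for illustration only -- substitute your own
# table, bucket, region, object key, and AWS credentials.
cmd = build_s3_ingest_command(
    table="AppLogs",
    s3_url="https://my-bucket.s3.us-east-1.amazonaws.com/logs/app.csv",
    aws_key_id="<AWS_ACCESS_KEY_ID>",
    aws_secret="<AWS_SECRET_ACCESS_KEY>",
)
print(cmd)
```

Because the credentials travel inside the command text, you would typically fetch them from a secret store rather than hard-coding them, and the same URL pattern works for both one-off ingestion and scripted, recurring loads.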