import type { WebGPUCacheSampler } from "./webgpuCacheSampler";
import type { WebGPUMaterialContext } from "./webgpuMaterialContext";
import type { WebGPUPipelineContext } from "./webgpuPipelineContext";
import type { WebGPUEngine } from "../webgpuEngine";
import type { WebGPUDrawContext } from "./webgpuDrawContext";
/** @internal */
export declare class WebGPUCacheBindGroups {
    /** Total number of bind groups created since the start of the program */
    static NumBindGroupsCreatedTotal: number;
    /** Number of bind groups created during the last frame */
    static NumBindGroupsCreatedLastFrame: number;
    /** Number of cache lookups performed during the last frame */
    static NumBindGroupsLookupLastFrame: number;
    /** Number of bind group retrievals that did not need a cache lookup during the last frame */
    static NumBindGroupsNoLookupLastFrame: number;
    private static _Cache;
    private static _NumBindGroupsCreatedCurrentFrame;
    private static _NumBindGroupsLookupCurrentFrame;
    private static _NumBindGroupsNoLookupCurrentFrame;
    private _device;
    private _cacheSampler;
    private _engine;
    /** If true, the cache is bypassed and the bind groups are always (re)created */
    disabled: boolean;
    /** Gets the current cache statistics */
    static get Statistics(): {
        totalCreated: number;
        lastFrameCreated: number;
        lookupLastFrame: number;
        noLookupLastFrame: number;
    };
    /** Clears the cache */
    static ResetCache(): void;
    constructor(device: GPUDevice, cacheSampler: WebGPUCacheSampler, engine: WebGPUEngine);
    /** Updates the per-frame statistics; meant to be called once at the end of each frame */
    endFrame(): void;
    /**
     * The cache is currently keyed on the uniform/storage buffers, samplers and textures used by the bind groups.
     * Note that all uniform buffers have an offset of 0 in Babylon and there is no use case where the same buffer is used with different capacity values,
     * so the offset/size of a buffer does not need to be part of the cache key, only its id.
     * @param webgpuPipelineContext the pipeline context the bind groups are created for
     * @param drawContext the draw context holding the buffers bound to the bind groups
     * @param materialContext the material context holding the samplers and textures bound to the bind groups
     * @returns a bind group array
     */
    getBindGroups(webgpuPipelineContext: WebGPUPipelineContext, drawContext: WebGPUDrawContext, materialContext: WebGPUMaterialContext): GPUBindGroup[];
}
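// --- Usage sketch (not part of the declaration above) ---
// The class is marked @internal, so applications normally never construct it themselves:
// the WebGPUEngine owns the instance and calls getBindGroups/endFrame. The static
// `Statistics` getter and `ResetCache()` are still reachable for diagnostics, as sketched
// below. The import path is an assumption and may differ depending on how the package
// is consumed; the callback hook (scene.onAfterRenderObservable) is just one possible
// place to read the counters.
//
// import { WebGPUCacheBindGroups } from "@babylonjs/core/Engines/WebGPU/webgpuCacheBindGroups";
//
// scene.onAfterRenderObservable.add(() => {
//     const stats = WebGPUCacheBindGroups.Statistics;
//     console.log(
//         `bind groups - total created: ${stats.totalCreated}, ` +
//             `created last frame: ${stats.lastFrameCreated}, ` +
//             `lookups last frame: ${stats.lookupLastFrame}, ` +
//             `no-lookup retrievals last frame: ${stats.noLookupLastFrame}`
//     );
// });
//
// // Drop all cached bind groups (e.g. while debugging binding issues):
// WebGPUCacheBindGroups.ResetCache();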