If you were a Star Trek fan like myself growing up, you might remember a funny device called the Heisenberg Compensator. It was used when transporting people to account for the effects of the uncertainty principle, which roughly states that the more precisely you measure one property of a particle, the less precisely you can know another. OK - the geeky physics lesson is over for now.
In the field of Information Technology, this is commonly extrapolated as the Observer Effect. In essence, when we try to monitor and observe servers, networks, and applications, the act of monitoring itself changes the results. This makes sense, as it takes some network bandwidth, CPU cycles, and other resources to effectively monitor an environment. However, the size of that impact depends heavily on where the monitoring takes place. Which brings me to the agent vs. agentless question.
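To make that concrete, here is a rough sketch of one way you might quantify the observer effect on a single host: measure the footprint of the monitoring process itself. This is just an illustration, not anyone's official tooling - the process name "monitoring-agent" is a placeholder for whatever agent you actually run, and it assumes the psutil library is installed.

```python
# Rough sketch: how much CPU and memory is the monitoring agent itself using?
# "monitoring-agent" is a hypothetical process name - substitute your own.
import psutil

AGENT_NAME = "monitoring-agent"  # placeholder process name


def agent_footprint(name: str = AGENT_NAME):
    """Return (total_cpu_percent, total_rss_bytes) for all matching processes."""
    cpu_total, rss_total = 0.0, 0
    for proc in psutil.process_iter(["name"]):
        if proc.info["name"] != name:
            continue
        try:
            cpu_total += proc.cpu_percent(interval=0.5)  # sample CPU over 0.5s
            rss_total += proc.memory_info().rss          # resident memory in bytes
        except (psutil.NoSuchProcess, psutil.AccessDenied):
            continue  # process exited or is off-limits; skip it
    return cpu_total, rss_total


if __name__ == "__main__":
    cpu, rss = agent_footprint()
    print(f"Agent CPU: {cpu:.1f}%  RSS: {rss / 1_048_576:.1f} MiB")
```

Even a crude number like this is useful, because it tells you how much of the "observation" is being paid for by the very system you are trying to observe.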
What are your thoughts on using agents for monitoring your environment? The typical argument goes something like this: agents provide richer data streams but add load to the application host, whereas agentless approaches can only pull limited data (bounded by the API and security access permissions) yet shift most of the load to an external poller, so they impact the application less severely (see the sketch below for how I picture the two models). Has this been your mindset / experience, or have you encountered the opposite ... or something in between?
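For what it's worth, here is a minimal sketch of the two models as I tend to picture them, under some loud assumptions: the collector and device URLs are purely hypothetical, the metric names are made up, and the host-side stats again lean on psutil.

```python
# Sketch only: agent-style push vs. agentless pull.
# COLLECTOR_URL and DEVICE_API are hypothetical endpoints, not real services.
import json
import time
import urllib.request

import psutil  # pip install psutil

COLLECTOR_URL = "http://collector.example.com/ingest"   # hypothetical
DEVICE_API = "http://device.example.com/api/v1/stats"   # hypothetical


def agent_push():
    """Agent model: runs on the monitored host, reads rich local stats,
    and pushes them to a collector - the cost lands on the application host."""
    payload = {
        "cpu": psutil.cpu_percent(interval=1),       # host-wide CPU %
        "mem": psutil.virtual_memory().percent,      # host-wide memory %
        "ts": time.time(),
    }
    req = urllib.request.Request(
        COLLECTOR_URL,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(req, timeout=5)


def agentless_pull():
    """Agentless model: an external poller asks the device or API for whatever
    it exposes - less detail, but the polling load stays off the host."""
    with urllib.request.urlopen(DEVICE_API, timeout=5) as resp:
        return json.loads(resp.read())
```

The trade-off shows up right in the structure: the push function has to run on the box you care about, while the pull function can live anywhere but only sees what the remote API is willing to hand over.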