It's common for access to some web sites to be restricted to users from particular Internet Protocol (IP) addresses, usually in addition to some other identification and authentication method. But other IP addresses are often added to this "allow list", and these should not necessarily be trusted in the same way.
In a typical scenario, a web site hosted on the internet that is used to administer another web application might be restricted to the company's own IP addresses. Then the developers say they need to check something on the live site, or another server needs to index the content, or someone wants to work from home for a while, or the site needs to be demonstrated at a client's location. All these additional IP addresses are added to the "allow list". The restrictions may be applied at a network firewall, in a traffic management system, at the web server, in the application itself, in intrusion detection systems, in log analysis software, or in several of these at once. Such lists are difficult to manage: unless each entry is carefully documented and given a fixed time limit, after which it is either confirmed again by an appropriate person or removed, in time there will be many IP addresses that no-one can explain. These extra addresses are, at least, quite often hard for someone else to guess.
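One way to keep such entries manageable is to record, alongside each address range, who requested it, why, and when it expires, and to refuse expired entries automatically. A minimal sketch, assuming invented networks, owners and field names (nothing here comes from any particular product):

```python
# Sketch: an allow list whose entries carry a documented reason and an
# expiry date, so stale exceptions stop working instead of lingering.
import ipaddress
from datetime import date

# Illustrative entries only; the networks, owners and reasons are invented.
ALLOW_LIST = [
    {"network": ipaddress.ip_network("203.0.113.0/29"),
     "reason": "office egress", "owner": "netops",
     "expires": date(2030, 1, 1)},
    {"network": ipaddress.ip_network("198.51.100.7/32"),
     "reason": "developer working from home", "owner": "dev team",
     "expires": date(2020, 6, 30)},
]

def is_allowed(addr: str, today: date) -> bool:
    """True only if addr matches an entry that has not yet expired."""
    ip = ipaddress.ip_address(addr)
    return any(ip in entry["network"] and today <= entry["expires"]
               for entry in ALLOW_LIST)
```

With this shape, the periodic review becomes a matter of listing entries whose expiry date is approaching and asking the recorded owner whether the reason still holds.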
However, there is another area where IP addresses are added to "allow lists": remote monitoring and testing services. These might be checking uptime, response times, content changes, HTML validation or security. The service providers publish the IP addresses of their source systems so that companies can specifically allow access to their web sites. Since the number of these services is relatively small, it is not too difficult to find which one might give access to areas of a web site or web application that the public (and malicious people) should not be able to reach. The particular danger here is that these IP addresses might be excluded from monitoring and logging, so even a diligent web site manager might not realise, for example, that the uptime monitoring service is making unusual, or excessive, requests.
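Rather than exempting these sources from logging, their traffic can be counted against an expected budget so that excessive requests stand out. A minimal sketch, where the monitor's address and the once-a-minute budget are assumptions for illustration:

```python
# Sketch: count requests per allow-listed source instead of excluding it
# from logs, and flag any source that exceeds its expected request budget.
from collections import Counter

# Illustrative: an uptime monitor expected to poll once a minute.
EXPECTED_PER_HOUR = {"192.0.2.10": 60}

def flag_excessive(requests_this_hour):
    """Return the sources whose hourly request count exceeds their budget.
    Sources with no recorded budget are always flagged."""
    counts = Counter(requests_this_hour)
    return [src for src, n in counts.items()
            if n > EXPECTED_PER_HOUR.get(src, 0)]
```

The same records answer the question the paragraph above raises: whether a "trusted" source is behaving the way it was trusted to behave.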
Although it is unlikely that a malicious person is using a "trusted" address unless routing has been compromised as well, problems coming from what appears to be a legitimate source can go undetected. The IP address may have been typed incorrectly, or worse, the restrictions or exceptions may not have been implemented correctly, granting the privileged access to more addresses than intended. And exempting a user's session from logging is itself privileged access.
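A mistyped prefix length shows how quietly an exception can widen. Assuming the intent was to allow a small /28 range (the addresses here are documentation examples, not real rules), dropping a digit to /2 admits over a billion addresses:

```python
import ipaddress

intended = ipaddress.ip_network("203.0.113.0/28")               # 16 addresses
# Typo: "/28" entered as "/2" (strict=False mirrors tools that
# silently mask off the host bits rather than rejecting the rule).
mistyped = ipaddress.ip_network("203.0.113.0/2", strict=False)

print(intended.num_addresses)   # 16
print(mistyped.num_addresses)   # 1073741824
# An address nowhere near the intended range now has privileged access:
print(ipaddress.ip_address("198.51.100.1") in mistyped)  # True
```

A rule review that checks each entry's address count against its documented purpose would catch this immediately; nothing in the traffic itself would.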
Allow the traffic through, but be very specific about what is allowed, and monitor what is going on. Review all the exceptions periodically. Be especially careful about anything that bypasses authentication on an otherwise public site, such as allowing a search engine to crawl restricted-access content.