Where the ports live
- The ports defined in the App Segment (e.g., TCP 1–65535 except 53) are remote ports — they represent what the connector should probe on the application servers behind ZPA.
- They are not local listener ports on the connector itself. The connector uses these definitions to run outbound reachability tests to the app’s destination IP(s).
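As a rough illustration of that traffic direction, here is a minimal sketch of an outbound reachability probe (the IP and port are the example values used later in this walkthrough; this is illustrative, not the connector's actual implementation):

```python
import socket

# The App Segment ports are *remote*: the connector connects OUT to them
# on the app server. It does NOT do the equivalent of bind()/listen()
# on those ports locally.
def probe_remote_port(app_ip: str, port: int, timeout: float = 2.0) -> bool:
    try:
        with socket.create_connection((app_ip, port), timeout=timeout):
            return True   # something on the app server answered
    except OSError:
        return False      # closed, filtered, or unreachable

# IP/port taken from the example below.
print(probe_remote_port("10.222.1.222", 443))
```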
What happens with your wildcard
- App segment config
  - `*.cordero.me`
  - Ports: all TCP/UDP except 53
- DNS resolution
  - The connector does a DNS lookup for `app1.cordero.me` → `10.222.1.222`.
- Health check cycle
  - The connector now runs reachability tests to `10.222.1.222` on every port listed in the app segment (all ports except 53); see the sketch after this list.
  - That includes 443 (the real one you need) and thousands of others (80, 3389, 5000, 1025, etc.).
- Health status reporting
  - If 443 is up, the connector reports “app1.cordero.me:443 Up” to the ZPA CA.
  - If some other port is closed (say 21/FTP), it reports that as “Down,” even though you don’t care about that service.
Brokered session
-
When your PC (via ZCC) requests
https://app1.cordero.me
, the CA looks at health reports for 443 only, finds which connector(s) are “Up,” and assigns one of them to broker. -
The fact that the connector wasted time testing thousands of other ports doesn’t affect this session directly, but it eats connector capacity and inflates health-check cycles.
-
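To tie the steps above together, here is a minimal, illustrative sketch of one health-check cycle under the assumptions of this walkthrough. The hostname, IP, and field names come from the example or are hypothetical; this is not the connector's real code or the ZPA API.

```python
import socket

# Hypothetical representation of the app segment from the walkthrough.
# The field names are illustrative, not the ZPA API schema.
APP_SEGMENT = {
    "domains": ["*.cordero.me"],
    "tcp_ports": [p for p in range(1, 65536) if p != 53],  # "all except 53"
}

def resolve(fqdn: str) -> str:
    """Step 2: DNS resolution done by the connector."""
    return socket.gethostbyname(fqdn)  # e.g. app1.cordero.me -> 10.222.1.222

def probe(ip: str, port: int, timeout: float = 1.0) -> bool:
    """Step 3: outbound TCP reachability test to the remote app port."""
    try:
        with socket.create_connection((ip, port), timeout=timeout):
            return True
    except OSError:
        return False

def health_check_cycle(fqdn: str, ports: list[int]) -> dict[int, str]:
    """Steps 3-4: probe every configured port and record Up/Down per port."""
    ip = resolve(fqdn)
    return {port: ("Up" if probe(ip, port) else "Down") for port in ports}

# With "all TCP ports except 53", this loop would run ~65,534 TCP probes per
# FQDN per cycle -- including 443 (the one you care about) and ports like
# 21, 80, 3389, 5000, 1025 that get reported "Down" if nothing listens.
# Limited to the first 10 ports here (and an example hostname that would
# need to resolve in your environment) so the sketch finishes quickly.
print(health_check_cycle("app1.cordero.me", APP_SEGMENT["tcp_ports"][:10]))
```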
Why this matters
- With a wildcard + all ports, the connector is doing remote reachability checks across thousands of ports it will never use.
- This is why you see the “6,000 check” recommendation — if your segment creates more than ~6,000 remote probes per cycle, health checks lag and connector CPU is wasted.
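A quick back-of-the-envelope count shows how far an “all ports” wildcard overshoots that guideline (the ~6,000 figure is the guideline above; the rest is simple arithmetic on the example segment):

```python
# Rough probe-count estimate per health-check cycle.
# Numbers other than the ~6,000 guideline are illustrative assumptions.
discovered_fqdns = 1        # e.g. app1.cordero.me learned via the wildcard
tcp_ports = 65535 - 1       # "all TCP ports except 53"
udp_ports = 65535 - 1       # "all UDP ports except 53"

probes_per_cycle = discovered_fqdns * (tcp_ports + udp_ports)
print(probes_per_cycle)     # 131068 -- far above the ~6,000 guideline,
                            # and that is with only ONE discovered FQDN
```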
Notes:
- The ports are remote app ports.
- The connector doesn’t wait for the client to request 443 — it proactively checks all ports in the segment every cycle.
- This is why you define only the actual app ports (e.g., just 443 for `app1.cordero.me`), not everything.
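For contrast, a narrowly scoped segment keeps the per-cycle probe count trivial (the field names below are hypothetical, mirroring the earlier sketch rather than the real ZPA schema):

```python
# Hypothetical narrow segment: one FQDN, one real app port.
NARROW_SEGMENT = {
    "domains": ["app1.cordero.me"],
    "tcp_ports": [443],
}

# Probes per health-check cycle: 1 FQDN x 1 port = 1,
# versus ~131,000 for the wildcard "all ports except 53" definition.
probes = len(NARROW_SEGMENT["domains"]) * len(NARROW_SEGMENT["tcp_ports"])
print(probes)  # 1
```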