
Exposing Fixed Ports to External Applications in OpenShift Bare Metal Using External IPs

May 9, 2025

In a recent project, we were operating applications in an OpenShift bare-metal environment and had to figure out how to expose internal services to a robot application running outside the cluster. The environment imposed constraints that made the typical exposure models, such as NodePort or LoadBalancer, ineffective.

The Problem

We had an application running inside our OpenShift cluster that needed to communicate with a robot client outside the cluster. The robot was configured to connect to a fixed port range (e.g., 5000–5008), and modifying these ports wasn’t an option.

However, NodePort services only expose ports in a restricted range (by default, 30000–32767).

To complicate things further:

  • The robot was hardcoded to use specific ports outside this range.
  • We had to ensure direct accessibility from the external client without modifying port mappings.
  • LoadBalancer services weren’t viable in our bare-metal setup without a cloud provider integration.

So we needed an alternative approach.

The Solution: External IPs

We solved the problem using External IPs, which allow Kubernetes services to be exposed directly via node IP addresses and arbitrary ports — even outside the default NodePort range.

What We Did

1. Identify a Reachable Node IP

We selected a worker node with a reachable IP (one that the robot network could connect to):

oc get nodes -o wide

NAME           STATUS   ROLES    ...   INTERNAL-IP
worker-node1   Ready    worker   ...   192.168.100.101

2. Allow the Node IP in the External IP Policy

OpenShift restricts the use of external IPs by default. To allow our node IP, we edited the cluster network configuration:

oc edit network.config cluster

Under the spec.externalIP.policy section:

spec:
  externalIP:
    policy:
      allowedCIDRs:
        - 192.168.100.101/32   # Allow exact node IP
      rejectedCIDRs:
        - 0.0.0.0/0            # Block all others

This configuration tells OpenShift to allow only the specified IP to be used for external access.
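As an alternative to the interactive edit, the same policy can be applied non-interactively, which is useful in automation. This is a sketch under the assumption (per the OpenShift documentation) that the externalIP policy lives on the cluster Network configuration resource; the CIDRs are the example values from this post. Validating the patch document locally first avoids a typo breaking the cluster config:

```shell
# Sketch: build the externalIP policy patch and sanity-check it
# before applying. CIDRs are the example values from this post.
PATCH='{
  "spec": {
    "externalIP": {
      "policy": {
        "allowedCIDRs": ["192.168.100.101/32"],
        "rejectedCIDRs": ["0.0.0.0/0"]
      }
    }
  }
}'

# Verify the patch is valid JSON before touching the cluster
echo "$PATCH" | python3 -m json.tool > /dev/null && echo "patch OK"

# Apply it (requires cluster-admin):
# oc patch network.config cluster --type=merge -p "$PATCH"
```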

3. Create a Service with External IP

We defined a ClusterIP service with an externalIPs field:

apiVersion: v1
kind: Service
metadata:
  name: robot-service
  namespace: my-app
spec:
  selector:
    app: my-application
  ports:
    - protocol: TCP
      port: 5000        # Port the robot connects to
      targetPort: 5000  # Pod port
  externalIPs:
    - <NODEIP>          # Node IP allowed in the policy

oc apply -f robot-connector-service.yaml

To verify, SSH to the node and check the listening port:

ssh user@<NODEIP>
ss -tunl | grep -i 5000

4. Robot Configuration

The robot was configured to connect directly to the Node IP on its fixed ports:

curl http://<NodeIP>:5000

The traffic reached the external IP, which OpenShift routed to the correct pod.

Note: Make sure port 5000 is not already used on the node. If another service (e.g., a system daemon or container runtime) is already bound to that port, the traffic will not be forwarded correctly and may cause connection failures.
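The check in the note above can be scripted. Here is a minimal bash sketch (my own, not from the original setup) that probes whether something already accepts connections on a local port before you point the service at it; it uses bash's built-in /dev/tcp, so no extra tools are needed:

```shell
# Sketch: succeeds (exit 0) if something accepts a TCP connection on
# 127.0.0.1:$1. Requires bash (uses the /dev/tcp pseudo-device).
port_in_use() {
  ( exec 3<> "/dev/tcp/127.0.0.1/$1" ) 2>/dev/null
}

if port_in_use 5000; then
  echo "port 5000 is already bound on this node"
else
  echo "port 5000 looks free"
fi
```

Run it on the node that will carry the external IP before creating the service; a false "in use" result usually points at a system daemon or container runtime already holding the port.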

Why This Worked

  • External IPs allow Kubernetes to listen on any port on the node — not limited to the NodePort range.
  • The robot app didn’t need modification.
  • No need for a cloud-based LoadBalancer or Ingress Controller.

Considerations

  • You need to ensure firewall rules allow traffic on those ports on the node.
  • Services using external IPs are not load balanced unless you configure it at a lower network layer (e.g., keepalived, HAProxy).
  • Node-level availability is important. If the selected node goes down, so does external access to that port.
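To mitigate the single-node concern above, one common pattern is to front two or more nodes with a TCP load balancer, as the second bullet suggests. A minimal HAProxy passthrough sketch for the robot port (the node IPs here are hypothetical, not part of the original setup):

```
# haproxy.cfg sketch: TCP passthrough for the robot port
defaults
    mode tcp
    timeout connect 5s
    timeout client  30s
    timeout server  30s

frontend robot_5000
    bind *:5000
    default_backend robot_nodes

backend robot_nodes
    # Hypothetical node IPs; 'check' removes unreachable nodes
    server worker1 192.168.100.101:5000 check
    server worker2 192.168.100.102:5000 check
```

Each backend node would need its IP allowed in the externalIP policy, and the robot would then target the HAProxy address instead of a single node.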

Conclusion

This use case illustrates how External IPs in OpenShift can be a powerful tool when dealing with constraints that fall outside typical Kubernetes service patterns.

While not always ideal for large-scale scenarios, they provide a pragmatic and effective way to expose specific ports, especially when legacy or external systems are involved and configuration changes are limited.

If you have any questions or feedback, feel free to comment.

About The Author
Suraj Solanki
Senior DevOps Engineer
LinkedIn: https://www.linkedin.com/in/suraj-solanki
Topmate: https://topmate.io/suraj_solanki
