26 Apr 2022 |
Julius von Kohout | It is the default ingress that is installed with Kubeflow 1.5 and is able to properly forward Seldon requests. | 16:53:22 |
Rachit Chauhan | can you print the output of kubectl get gw GATEWAY_NAME -n NAMESPACE -o yaml ? | 16:55:53 |
Julius von Kohout | Also, internally only the gateway knative-local-gateway.knative-serving.svc.cluster.local works, but not cluster-local-gateway.istio-system.svc.cluster.local. I will provide the three gateway YAMLs in an hour. | 16:59:21 |
Julius von Kohout | apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
  name: cluster-local-gateway
  namespace: istio-system
  labels:
    release: istio
spec:
  selector:
    app: cluster-local-gateway
    istio: cluster-local-gateway
  servers:
  - hosts:
    - '*'
    port:
      name: http
      number: 80
      protocol: HTTP
---
apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
  name: knative-local-gateway
  namespace: knative-serving
  labels:
    networking.knative.dev/ingress-provider: istio
    serving.knative.dev/release: v0.22.1
spec:
  selector:
    app: cluster-local-gateway
    istio: cluster-local-gateway
  servers:
  - hosts:
    - '*'
    port:
      name: http
      number: 8081
      protocol: HTTP
---
apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
  name: kubeflow-gateway
  namespace: kubeflow
spec:
  selector:
    istio: ingressgateway
  servers:
  - hosts:
    - '*'
    port:
      name: http
      number: 80
      protocol: HTTP
and I think Kubeflow uses https://github.com/kserve/kserve/blob/release-0.7/install/v0.7.0/kserve_kubeflow.yaml | 17:31:55 |
Julius von Kohout | kind: ConfigMap
apiVersion: v1
metadata:
  name: kserve-config
  namespace: kubeflow
  labels:
    app: kserve
    app.kubernetes.io/name: kserve
data:
  ingressGateway: kubeflow/kubeflow-gateway
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: config-istio
  namespace: knative-serving
  labels:
    networking.knative.dev/ingress-provider: istio
    serving.knative.dev/release: v0.22.1
data:
  gateway.kubeflow.kubeflow-gateway: istio-ingressgateway.istio-system.svc.cluster.local | 17:41:52 |
Timos | Nope, not sure how to do it | 17:44:48 |
Rachit Chauhan | If you are trying to reach your endpoint via kubeflow-gateway, you can see which routes are set up on that controller using the istioctl CLI:
istioctl proxy-config routes ISTIO_CONTROLLER_POD_NAME -n istio-system
| 19:03:07 |
Rachit Chauhan | Check whether the relevant VirtualServices are created and the routes exist on this Istio controller (which is essentially Envoy). | 19:04:42 |
Rachit Chauhan | might be helpful https://knative.dev/docs/serving/setting-up-custom-ingress-gateway/ | 19:25:46 |
Rachit Chauhan | Julius von Kohout: I am not sure which ConfigMap this kserve-config is https://kubeflow.slack.com/archives/CH6E58LNP/p1650994912193769?thread_ts=1650986776.568939&cid=CH6E58LNP | 22:38:03 |
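For context, the ConfigMap that KServe v0.7 itself reads is named inferenceservice-config, not kserve-config. A minimal sketch of its ingress section, using the common Kubeflow defaults (these values are illustrative and may differ from the cluster discussed above):

```yaml
# Sketch of the ConfigMap KServe v0.7 reads for ingress settings.
# Names/values are the usual Kubeflow defaults, not this cluster's actual config.
apiVersion: v1
kind: ConfigMap
metadata:
  name: inferenceservice-config
  namespace: kubeflow
data:
  ingress: |-
    {
      "ingressGateway": "kubeflow/kubeflow-gateway",
      "ingressService": "istio-ingressgateway.istio-system.svc.cluster.local",
      "localGateway": "knative-serving/knative-local-gateway",
      "localGatewayService": "knative-local-gateway.istio-system.svc.cluster.local"
    }
```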
27 Apr 2022 |
Dan Sun | Julius von Kohout what's the error you are getting? | 00:23:09 |
| zorba(손주형) joined the room. | 01:43:20 |
zorba(손주형) | Hi!
Can the KServe Triton server expose a Prometheus metrics port? | 01:44:32 |
Dan Sun | Yes, all you need is to add the Prometheus annotations, I believe | 01:45:48 |
zorba(손주형) | Actually I did that and verified it in the pod logs, but the Triton metrics are not shown in the Prometheus server | 01:46:43 |
Dan Sun | are you able to curl the metrics locally? | 01:47:47 |
zorba(손주형) | Yes. I installed Prometheus and Grafana with the kube-prometheus-stack Helm chart. Should I do something to expose the InferenceService pod metrics? | 01:49:19 |
Dan Sun | no, as long as prom server can scrape the pod metrics that should work | 01:56:37 |
zorba(손주형) | Okay, but after I installed the Istio Grafana and Prometheus addons separately, the Triton metrics are collected and shown in the Grafana installed by the Istio addon | 01:58:15 |
Dan Sun | you mean it does not work with prometheus helm chart? | 02:00:30 |
zorba(손주형) | yes | 02:00:41 |
Dan Sun | I am not too sure about the difference, but it is unlikely to be a KServe issue | 02:00:59 |
zorba(손주형) | The Prometheus from the Helm chart gets metrics of the InferenceService pod itself, but not the Triton metrics | 02:02:57 |
Dan Sun | What does that mean? The triton container is in the isvc pod | 02:11:19 |
Dan Sun | so there are two containers in the isvc pod, queue proxy and triton container | 02:12:53 |
Dan Sun | are you saying it is getting the isvc queue proxy metrics but not triton ? | 02:13:18 |
Dan Sun | Queue proxy metrics are exposed on port 9091, while Triton exposes its metrics on port 8002, I believe | 02:14:55 |
Dan Sun | if you set the annotation port to 8002 then it should get the triton metrics | 02:15:25 |
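A sketch of what that annotation could look like on the InferenceService, assuming annotation-based scraping is enabled in the Prometheus server. The service name and model URI are placeholders; the annotation keys follow the standard prometheus.io convention:

```yaml
# Hypothetical InferenceService; only the annotations matter here.
# prometheus.io/port points at Triton's metrics port (8002) instead of
# the queue-proxy default (9091).
apiVersion: serving.kserve.io/v1beta1
kind: InferenceService
metadata:
  name: my-triton-model          # placeholder name
  annotations:
    prometheus.io/scrape: "true"
    prometheus.io/port: "8002"
    prometheus.io/path: "/metrics"
spec:
  predictor:
    triton:
      storageUri: gs://example-bucket/models   # placeholder URI
```

Note that kube-prometheus-stack does not act on these annotations out of the box; the scrape target still has to be picked up by its configuration, which may explain the difference from the Istio addon's Prometheus.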