!qdLBZhKgcImrdxWBWc:matrix.org

minikf

132 Members
1 Server

22 Nov 2021
@_slack_kubeflow_UM56LA7N3:matrix.org Benjamin Tan
In particular: https://www.kubeflow.org/docs/components/notebooks/container-images/#custom-images
03:06:38
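For reference, the page linked above lists the requirements a custom Notebook image must satisfy: expose Jupyter over HTTP on port 8888 and serve under the NB_PREFIX base URL that Kubeflow injects. A minimal sketch along the lines of the documented example (base image and install steps are illustrative, not from this thread):

# Any base image works as long as Jupyter ends up listening on 0.0.0.0:8888
FROM python:3.9
RUN pip install --no-cache-dir jupyterlab
EXPOSE 8888
# Kubeflow sets NB_PREFIX when it spawns the pod; Jupyter must honor it
ENV NB_PREFIX /
CMD ["sh", "-c", "jupyter lab --notebook-dir=/home/jovyan --ip=0.0.0.0 --no-browser --allow-root --port=8888 --NotebookApp.token='' --NotebookApp.password='' --NotebookApp.allow_origin='*' --NotebookApp.base_url=${NB_PREFIX}"]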
@_slack_kubeflow_U029JUWQKLN:matrix.org Arvind Gupta
In reply to @_slack_kubeflow_UFCE0TABE:matrix.org
Adrian Lee, as messaged over DM, I believe your pipeline is hitting the limits of K8s pod resources. MiniKF is only meant for very lightweight jobs, to get familiarized with Kubeflow. Would you be able to share your pipeline code?
06:14:24
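If pod resources are indeed the bottleneck, individual pipeline steps can request more capacity explicitly. A minimal sketch, assuming the KFP v1 SDK (the step name, image, and sizes are illustrative, not from this thread):

import kfp.dsl as dsl

@dsl.pipeline(name="example-pipeline")
def pipeline():
    # A single heavy step; raise its K8s resource requests/limits
    train = dsl.ContainerOp(name="train", image="my-registry/train:latest")
    train.set_memory_request("4G").set_memory_limit("8G")
    train.set_cpu_request("1").set_cpu_limit("2")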
@_slack_kubeflow_U02DWFSKS1F:matrix.org Alex Aidun
In reply to @_slack_kubeflow_UM56LA7N3:matrix.org
In particular: https://www.kubeflow.org/docs/components/notebooks/container-images/#custom-images
Div Dasani thanks for reaching out - the docker images provided by Arrikto for MiniKF make sure that the modifications required to run Kale and Rok are available in the Notebook Server so you can take advantage of these two technologies. Therefore replacing these with one that does not have similar modifications can result in MiniKF not behaving as expected. That being said, can you shed some light on what you are looking to accomplish and maybe I can help? Or are you just looking to try a different docker image?
19:18:55
@_slack_kubeflow_U02M8CN8J0L:matrix.org Parin Choganwala
In reply to @_slack_kubeflow_U02DWFSKS1F:matrix.org
Div Dasani thanks for reaching out - the docker images provided by Arrikto for MiniKF make sure that the modifications required to run Kale and Rok are available in the Notebook Server so you can take advantage of these two technologies. Therefore replacing these with one that does not have similar modifications can result in MiniKF not behaving as expected. That being said, can you shed some light on what you are looking to accomplish and maybe I can help? Or are you just looking to try a different docker image?
Thanks Alex Aidun & Benjamin Tan for your comments. So what Div is trying to achieve is to serve the TF model using a special custom Docker image mentioned above. That image has a specific version of TF Serving that is needed, as it implements a few optimizations that are not available in vanilla TF Serving.
20:00:12
@_slack_kubeflow_UCGU0F9K3:matrix.org cvenets joined the room. 22:54:26
@_slack_kubeflow_UCGU0F9K3:matrix.org cvenets
In reply to @_slack_kubeflow_U02M8CN8J0L:matrix.org
Thanks Alex Aidun & Benjamin Tan for your comments. So what Div is trying to achieve is to serve the TF model using a special custom Docker image mentioned above. That image has a specific version of TF Serving that is needed, as it implements a few optimizations that are not available in vanilla TF Serving.
Parin Choganwala Div Dasani Is the above image a Notebook Server image that also contains that specific TF version you want to use from inside the Notebook? Or is it just a TF Serving image that you want to spin up on its own in Kubeflow? If it is the first, then you just need to click “Custom” Image and put the image URL in the input box while creating a new Notebook Server. If it is the latter, then note that Kale in MiniKF uses Kubeflow’s KFServing (not TF Serving) underneath to serve the models. KFServing supports TensorFlow models. If you want something very custom, you will need to manually set up KFServing to use a custom image.
22:54:26
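On the "very custom" path mentioned above: KFServing's v1beta1 InferenceService also accepts an explicit predictor container instead of a built-in predictor such as tensorflow. A hedged sketch (the service name and image are placeholders, not the image discussed in this thread; the container name follows KFServing's convention):

apiVersion: "serving.kubeflow.org/v1beta1"
kind: "InferenceService"
metadata:
  name: "custom-serving"
spec:
  predictor:
    containers:
      # Custom predictor: any container serving the expected HTTP prediction protocol
      - name: kfserving-container
        image: my-registry/custom-tf-serving:latest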
@_slack_kubeflow_U02MVQEKDR6:matrix.org Div Dasani
In reply to @_slack_kubeflow_UCGU0F9K3:matrix.org
Parin Choganwala Div Dasani Is the above image a Notebook Server image that also contains that specific TF version you want to use from inside the Notebook? Or is it just a TF Serving image that you want to spin up on its own in Kubeflow? If it is the first, then you just need to click “Custom” Image and put the image URL in the input box while creating a new Notebook Server. If it is the latter, then note that Kale in MiniKF uses Kubeflow’s KFServing (not TF Serving) underneath to serve the models. KFServing supports TensorFlow models. If you want something very custom, you will need to manually set up KFServing to use a custom image.
It's the first, and we don't have a URL for the image (I don't believe docker provides one)
22:55:27
23 Nov 2021
@_slack_kubeflow_U029VNQ2YQZ:matrix.org _slack_kubeflow_U029VNQ2YQZ joined the room. 11:59:18
@_slack_kubeflow_U02M8CN8J0L:matrix.org Parin Choganwala
In reply to @_slack_kubeflow_U02MVQEKDR6:matrix.org
It's the first, and we don't have a URL for the image (I don't believe docker provides one)
cvenets That was useful information to have! Thank you! I spent some more time with MiniKF yesterday. Div and I will start a separate thread on what exactly we are trying to achieve.
14:04:40
@_slack_kubeflow_U02M8CN8J0L:matrix.org Parin Choganwala Screen Shot 2021-11-23 at 9.11.22 AM.png
14:11:50
@_slack_kubeflow_U02M8CN8J0L:matrix.org Parin Choganwala cvenets The following is what we want to do. We have a model trained using TF and already saved on S3 (this training happened outside MiniKF's Notebook Server). Now we want to create a Model Server on the Models page of MiniKF's UI. I used the following YAML to create the config.
apiVersion: "serving.kubeflow.org/v1beta1"
kind: "InferenceService"
metadata:
  name: "user-model"
spec:
  predictor:
    tensorflow:
      storageUri: "s3://dplus-dev-events-schema/models/mf_history_pooling/v1/user_model_with_pre_proc/"
It's been about 30 mins since I created the model server and the system is still doing something. Here is the screenshot. Are we on the right track here?
14:11:51
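One way to see what the system is "still doing" is to query the InferenceService directly from a terminal. A sketch, assuming the YAML above is saved as user-model.yaml and applied to a user namespace named kubeflow-user (both names are assumptions):

kubectl apply -f user-model.yaml -n kubeflow-user
# The READY column turns True once the predictor is up
kubectl get inferenceservice user-model -n kubeflow-user
# Events usually explain why it is stuck, e.g. the storage-initializer failing to pull from S3
kubectl describe inferenceservice user-model -n kubeflow-user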
@_slack_kubeflow_UCGU0F9K3:matrix.org cvenets
In reply to @_slack_kubeflow_U02M8CN8J0L:matrix.org
cvenets That was useful information to have! Thank you! I spent some more time with MiniKF yesterday. Div and I will start a separate thread on what exactly we are trying to achieve.
> It's the first, and we don't have a URL for the image (I don't believe docker provides one)
Div Dasani the above you shared is the URL, unless I'm misunderstanding something.
18:42:15
@_slack_kubeflow_UCGU0F9K3:matrix.org cvenets
In reply to @_slack_kubeflow_U02M8CN8J0L:matrix.org
cvenets The following is what we want to do. We have a model trained using TF and already saved on S3 (this training happened outside MiniKF's Notebook Server). Now we want to create a Model Server on the Models page of MiniKF's UI. I used the following YAML to create the config.
apiVersion: "serving.kubeflow.org/v1beta1"
kind: "InferenceService"
metadata:
  name: "user-model"
spec:
  predictor:
    tensorflow:
      storageUri: "s3://dplus-dev-events-schema/models/mf_history_pooling/v1/user_model_with_pre_proc/"
It's been about 30 mins since I created the model server and the system is still doing something. Here is the screenshot. Are we on the right track here?
Parin Choganwala this seems like a different case from what you are describing below, correct? https://kubeflow.slack.com/archives/CGRKM3N0G/p1637621726037900?thread_ts=1637364972.036200&cid=CGRKM3N0G I suggest you first try to download the model from S3 inside a Notebook and use Kale to serve it automatically (and see it in the Models UI) first, so you don't have to compile YAML files manually. Please take a look at tutorial 3 or 4 on how to serve a trained model here: arrikto.com/tutorials On the actual problem you are facing, can you confirm that the S3 URI you are providing to KFServing is publicly accessible on S3 and there is no need to log in to download?
18:50:25
@_slack_kubeflow_UCGU0F9K3:matrix.org cvenets
In reply to @_slack_kubeflow_UCGU0F9K3:matrix.org
Parin Choganwala this seems like a different case from what you are describing below, correct? https://kubeflow.slack.com/archives/CGRKM3N0G/p1637621726037900?thread_ts=1637364972.036200&cid=CGRKM3N0G I suggest you first try to download the model from S3 inside a Notebook and use Kale to serve it automatically (and see it in the Models UI) first, so you don't have to compile YAML files manually. Please take a look at tutorial 3 or 4 on how to serve a trained model here: arrikto.com/tutorials On the actual problem you are facing, can you confirm that the S3 URI you are providing to KFServing is publicly accessible on S3 and there is no need to log in to download?
Parin Choganwala let’s keep everything on a single thread, so all context is in one place. Bringing your message to the channel here: https://kubeflow.slack.com/archives/CGRKM3N0G/p1637694238046200
21:04:17
@_slack_kubeflow_U02M8CN8J0L:matrix.org Parin Choganwala
In reply to @_slack_kubeflow_UCGU0F9K3:matrix.org
Parin Choganwala let’s keep everything on a single thread, so all context is in one place. Bringing your message to the channel here: https://kubeflow.slack.com/archives/CGRKM3N0G/p1637694238046200
Yes, it's a different use case than what we described yesterday. I will check those tutorials to move forward. Also, the S3 URI is not publicly accessible. Is this a strict requirement with MiniKF?
21:05:30
@_slack_kubeflow_UCGU0F9K3:matrix.org cvenets
In reply to @_slack_kubeflow_U02M8CN8J0L:matrix.org
Yes, it's a different use case than what we described yesterday. I will check those tutorials to move forward. Also, the S3 URI is not publicly accessible. Is this a strict requirement with MiniKF?
> Also, the S3 URI is not publicly accessible.
OK. This is most probably the problem then. The KFServing pod doesn't have access to your S3 bucket to download the model. This has nothing to do with MiniKF.
21:11:58
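Worth noting before making anything public: KFServing can also pull from a private bucket if the InferenceService runs under a ServiceAccount that carries S3 credentials, via KFServing's documented S3 secret annotations. A hedged sketch (all names, the endpoint, and the key values are placeholders):

apiVersion: v1
kind: Secret
metadata:
  name: s3-credentials
  annotations:
    # Tells the KFServing storage-initializer how to reach the bucket
    serving.kubeflow.org/s3-endpoint: s3.amazonaws.com
    serving.kubeflow.org/s3-usehttps: "1"
type: Opaque
stringData:
  AWS_ACCESS_KEY_ID: <access-key-id>
  AWS_SECRET_ACCESS_KEY: <secret-access-key>
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: s3-sa
secrets:
  - name: s3-credentials

The InferenceService would then reference it with spec.predictor.serviceAccountName: s3-sa, so the storage-initializer can authenticate when downloading the model.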
@_slack_kubeflow_U02M8CN8J0L:matrix.org Parin Choganwala
In reply to @_slack_kubeflow_UCGU0F9K3:matrix.org
> Also, the S3 URI is not publicly accessible.
OK. This is most probably the problem then. The KFServing pod doesn't have access to your S3 bucket to download the model. This has nothing to do with MiniKF.
Great to know! I will have to create a separate bucket and put the model in there, making sure it's publicly available.
21:13:17
@_slack_kubeflow_U02M8CN8J0L:matrix.org Parin Choganwala
In reply to @_slack_kubeflow_U02M8CN8J0L:matrix.org
Great to know! I will have to create a separate bucket and put the model in there, making sure it's publicly available.
I completed this tutorial. It turns out that as long as I have the model loaded in the notebook server, I can mirror the steps explained in the tutorial above to serve the model. No need to worry about where it is located, right?
21:23:18
@_slack_kubeflow_U02M8CN8J0L:matrix.org Parin Choganwala Does Kale have any requirements for which versions of TensorFlow it can work with/support? 23:20:49
24 Nov 2021
@_slack_kubeflow_U02MVQEKDR6:matrix.org Div Dasani Hi team, I am trying to use kale.common.serveutils.serve with a tfrs.layers.factorized_top_k.ScaNN model, and running into issues. My model looks something like:
import numpy as np
import tensorflow_recommenders as tfrs
from kale.common.serveutils import serve

scann_index = tfrs.layers.factorized_top_k.ScaNN(
    # some_params
)
scann_index.index(
    candidate_vectors,
    np.array(vocab)
)
In particular, running type(scann_index) returns tensorflow_recommenders.layers.factorized_top_k.ScaNN. However, when attempting to serve the model:
kfserver = serve(scann_index, predictor='tensorflow')
I get the RuntimeError: Trying to create an InferenceService with predictor of type 'tensorflow' but the model is of type 'None'. Any ideas what's going on here?
01:08:39
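One hedged avenue to try while waiting for an answer (an assumption, not a confirmed fix for this error): the TensorFlow Recommenders docs export a ScaNN index with tf.saved_model.save, whitelisting the custom Scann ops, before handing it to any serving system. Verifying that the index survives a SavedModel round trip would at least rule out export problems:

import tensorflow as tf

# Hypothetical path; the "Scann" op namespace must be whitelisted
# because ScaNN layers use custom TensorFlow ops
path = "/home/jovyan/scann_index"
tf.saved_model.save(
    scann_index,
    path,
    options=tf.saved_model.SaveOptions(namespace_whitelist=["Scann"]),
)
loaded = tf.saved_model.load(path)  # sanity-check the round trip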
@_slack_kubeflow_U02N8CR9PAS:matrix.org _slack_kubeflow_U02N8CR9PAS joined the room. 02:40:13
@_slack_kubeflow_U02NTQBE3PT:matrix.org _slack_kubeflow_U02NTQBE3PT joined the room. 03:13:41
@_slack_kubeflow_UCGU0F9K3:matrix.org cvenets
In reply to @_slack_kubeflow_U02M8CN8J0L:matrix.org
I completed this tutorial. It turns out that as long as I have the model loaded in the notebook server, I can mirror the steps explained in the tutorial above to serve the model. No need to worry about where it is located, right?
Exactly! MiniKF uses Rok underneath to store everything natively in K8s PVCs, snapshot and version them, and make them instantly available to the KFServing instance, which then loads the model from its local PVC, cloned from the versioned snapshot 🙂
20:12:51
@_slack_kubeflow_U02NGBU11M3:matrix.org Marcelo Grammatico2 joined the room. 23:54:55
29 Nov 2021
@_slack_kubeflow_UGZAH4DBQ:matrix.org JoshBottum
In reply to @_slack_kubeflow_U02MVQEKDR6:matrix.org
Hi team, I am trying to use kale.common.serveutils.serve with a tfrs.layers.factorized_top_k.ScaNN model, and running into issues. My model looks something like:
import numpy as np
import tensorflow_recommenders as tfrs
from kale.common.serveutils import serve

scann_index = tfrs.layers.factorized_top_k.ScaNN(
    # some_params
)
scann_index.index(
    candidate_vectors,
    np.array(vocab)
)
In particular, running type(scann_index) returns tensorflow_recommenders.layers.factorized_top_k.ScaNN. However, when attempting to serve the model:
kfserver = serve(scann_index, predictor='tensorflow')
I get the RuntimeError: Trying to create an InferenceService with predictor of type 'tensorflow' but the model is of type 'None'. Any ideas what's going on here?
Div Dasani thanks for your report. I will ask around and get back.
22:10:15
