ValueError: Image path is None/Invalid Imagedata dictionary

I tried loading my Space as an API through gradio_client and that part went smoothly, but when I call the named API for prediction on an image stored locally on my system (or on a URL), the Space log shows this error: pydantic_core._pydantic_core.ValidationError: 1 validation error for ImageData
Input should be a valid dictionary or instance of ImageData [type=model_type, input_value='/9j/4AAQSkZJRgABAQAAAQAB…ULMJAMBsggdfzODQi4rU//Z', input_type=str]
For further information visit the pydantic error reference linked in the traceback.

I tried encoding the image in base64 and also updated my app.py to change how requests are processed and returned, but the issue persists. Any help is appreciated.


Bro, got any solution?? It's 2 am and I'm still struggling to resolve this :frowning:


It seems that the documentation says to use handle_file (do not send base64 directly)…

from gradio_client import Client, handle_file

client = Client("user/space-name")
res = client.predict(image=handle_file("./cat.jpg"), api_name="/predict") # local file
res = client.predict(image=handle_file("https://example.com/cat.jpg"), api_name="/predict") # URL

Does anyone have a solution to this? I’m trying to integrate an API call into my Android app over HTTP for some image recognition, and I don’t know how to pass in a local image. Only public URLs work.


Endpoints only accept the image data itself, not URLs to your device or local file paths. So if utilities or libraries that handle the conversion automatically (such as handle_file) are unavailable, you must read the file and send its bytes (or base64) yourself…


Use the Hugging Face Serverless Inference API and send the image bytes (or base64) to the model URL with a Bearer token. Do not send a local file path. The API endpoint format is:

https://huggingface.co/proxy/api-inference.huggingface.co/models/<MODEL_ID>

Example model id: google/vit-base-patch16-224. Docs verified 2025-10-15. (Hugging Face)

What the API expects

  • Auth: Authorization: Bearer hf_... using a fine-grained user token. (Hugging Face)

  • Payloads for image tasks:

    • Raw bytes body with Content-Type: image/jpeg (or png, webp).
    • JSON body with "inputs": "<base64 string>".
      The response for image-classification is an array of {label, score} objects; the current format is shown in the Image Classification API spec. Verified 2025-10-15. (Hugging Face)
  • Cold start: first call can return 503 while the model loads. Retry. (Hugging Face)

cURL quick tests

Binary bytes (recommended):

# Ref: https://huggingface.co/docs/inference-endpoints/supported_tasks  (binary example also applies)
#      https://huggingface.co/learn/cookbook/enterprise_hub_serverless_inference_api  (endpoint format)
curl -sS \
  -X POST \
  -H "Authorization: Bearer $HF_TOKEN" \
  -H "Content-Type: image/jpeg" \
  --data-binary "@/path/to/local.jpg" \
  "https://huggingface.co/proxy/api-inference.huggingface.co/models/google/vit-base-patch16-224"

JSON base64:

# Ref: https://huggingface.co/docs/inference-providers/en/tasks/image-classification
b64=$(base64 -w0 </path/to/local.jpg)
curl -sS -X POST \
  -H "Authorization: Bearer $HF_TOKEN" \
  -H "Content-Type: application/json" \
  -d "{”inputs”:”$b64”}" \
  "https://huggingface.co/proxy/api-inference.huggingface.co/models/google/vit-base-patch16-224"

The Inference Providers spec explicitly allows base64 or raw bytes, and returns {label, score} objects. (Hugging Face)
HF’s supported-tasks reference shows the exact --data-binary pattern for images. (Hugging Face)

Android (OkHttp) — send local image bytes

// Ref:
// - API URL shape + 503 note: https://huggingface.co/learn/cookbook/enterprise_hub_serverless_inference_api
// - Payload options + response schema: https://huggingface.co/docs/inference-providers/en/tasks/image-classification
// - InferenceClient accepts bytes/paths/URLs for images: https://huggingface.co/docs/huggingface_hub/en/package_reference/inference_client

import android.content.Context
import android.net.Uri
import okhttp3.MediaType.Companion.toMediaType
import okhttp3.OkHttpClient
import okhttp3.Request
import okhttp3.RequestBody.Companion.toRequestBody

fun classifyLocalImage(
    context: Context,
    contentUri: Uri,
    hfToken: String,
    modelUrl: String = "https://huggingface.co/proxy/api-inference.huggingface.co/models/google/vit-base-patch16-224"
): String {
    val bytes = context.contentResolver.openInputStream(contentUri)!!.use { it.readBytes() }
    val body = bytes.toRequestBody("image/jpeg".toMediaType()) // set correct type if PNG/WebP
    val req = Request.Builder()
        .url(modelUrl)
        .addHeader("Authorization", "Bearer $hfToken")
        .addHeader("Content-Type", "image/jpeg")
        .post(body)
        .build()
    OkHttpClient().newCall(req).execute().use { resp ->
        if (!resp.isSuccessful) error("HTTP ${resp.code}: ${resp.body?.string()}")
        return resp.body!!.string() // e.g., [{"label":"tabby","score":0.99}, ...]
    }
}
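
To work with the JSON string that classifyLocalImage returns, you can parse it with org.json, which ships with the Android SDK. This is a minimal sketch assuming the [{"label": ..., "score": ...}] image-classification response shape described above; the helper name parseTopLabel is just illustrative.

// Sketch: parse the image-classification response returned by classifyLocalImage.
// Assumes the [{"label": ..., "score": ...}] shape from the Image Classification API spec.
import org.json.JSONArray

fun parseTopLabel(responseJson: String): Pair<String, Double>? {
    val arr = JSONArray(responseJson)
    if (arr.length() == 0) return null
    val top = arr.getJSONObject(0) // results are typically ordered by descending score
    return top.getString("label") to top.getDouble("score")
}

// Usage (hypothetical):
// val raw = classifyLocalImage(context, uri, hfToken)
// val (label, score) = parseTopLabel(raw) ?: return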

Alternative: JSON base64 from Android

import android.content.Context
import android.net.Uri
import android.util.Base64
import okhttp3.MediaType.Companion.toMediaType
import okhttp3.OkHttpClient
import okhttp3.Request
import okhttp3.RequestBody.Companion.toRequestBody

fun classifyLocalImageAsJson(
    context: Context,
    contentUri: Uri,
    hfToken: String,
    modelUrl: String
): String {
    val bytes = context.contentResolver.openInputStream(contentUri)!!.use { it.readBytes() }
    val b64 = Base64.encodeToString(bytes, Base64.NO_WRAP) // avoid newlines
    val json = """{"inputs":"$b64"}"""
    val body = json.toRequestBody("application/json".toMediaType())
    val req = Request.Builder()
        .url(modelUrl)
        .addHeader("Authorization", "Bearer $hfToken")
        .post(body)
        .build()
    OkHttpClient().newCall(req).execute().use { resp ->
        if (!resp.isSuccessful) error("HTTP ${resp.code}: ${resp.body?.string()}")
        return resp.body!!.string()
    }
}

Minimal checklist

  1. Build the URL: https://huggingface.co/proxy/api-inference.huggingface.co/models/<MODEL_ID>. (Hugging Face)
  2. Add Authorization: Bearer <HF_TOKEN>. Use a fine-grained token. (Hugging Face)
  3. Send bytes with Content-Type: image/<ext> or send JSON with "inputs": "<base64>". (Hugging Face)
  4. Handle 503 by retrying. Handle 429 by backing off (see the retry sketch after this checklist). Rate limits for free users are a few hundred requests per hour; PRO increases limits. (Hugging Face)
  5. Expect image-classification output as [{"label": "...","score": ...}, ...]. (Hugging Face)
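
As a rough illustration of step 4, here is a hedged Kotlin sketch that retries an OkHttp call on 503 (model loading) and 429 (rate limit) with simple exponential backoff. The attempt count and delays are arbitrary choices, not values from the HF docs.

// Sketch only: retry on 503 (cold start) and 429 (rate limit) with exponential backoff.
// maxAttempts and the base delay are arbitrary; tune them for your app.
import okhttp3.OkHttpClient
import okhttp3.Request
import okhttp3.Response

fun executeWithRetry(client: OkHttpClient, req: Request, maxAttempts: Int = 4): Response {
    var delayMs = 1_000L
    repeat(maxAttempts - 1) {
        val resp = client.newCall(req).execute()
        if (resp.code != 503 && resp.code != 429) return resp
        resp.close()              // release the connection before retrying
        Thread.sleep(delayMs)     // only block like this on a background thread
        delayMs *= 2
    }
    return client.newCall(req).execute() // last attempt, returned as-is
}

// Usage (hypothetical): executeWithRetry(OkHttpClient(), req).use { resp -> ... }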

Common mistakes

  • Sending a file path string. The server cannot read client files. Send bytes or base64. (Hugging Face)
  • Wrong Content-Type. Match your actual image type, or use application/octet-stream (see the MIME-type lookup sketch after this list). (Hugging Face)
  • Newlines in base64. Use no-wrap encoding on mobile to avoid bad JSON.
  • Assuming JSON is required. Raw image bytes are accepted when you do not pass parameters. (Hugging Face)
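
For the Content-Type point above, a small sketch: on Android you can ask the ContentResolver for the MIME type of a content URI and fall back to application/octet-stream when it is unknown. The helper name resolveImageMime is illustrative.

// Sketch: pick a Content-Type that matches the actual image, falling back to octet-stream.
import android.content.Context
import android.net.Uri

fun resolveImageMime(context: Context, uri: Uri): String =
    context.contentResolver.getType(uri) ?: "application/octet-stream"

// Usage (hypothetical): pass the result to toRequestBody(...) and the Content-Type header
// instead of hard-coding "image/jpeg" in classifyLocalImage above.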

Short examples to copy

cURL, JPEG bytes:

# https://huggingface.co/docs/inference-endpoints/supported_tasks
curl -sS -X POST \
  -H "Authorization: Bearer $HF_TOKEN" \
  -H "Content-Type: image/jpeg" \
  --data-binary "@local.jpg" \
  "https://huggingface.co/proxy/api-inference.huggingface.co/models/google/vit-base-patch16-224"

cURL, JSON base64:

# https://huggingface.co/docs/inference-providers/en/tasks/image-classification
b64=$(base64 -w0 < local.jpg)
curl -sS -X POST \
  -H "Authorization: Bearer $HF_TOKEN" \
  -H "Content-Type: application/json" \
  -d "{”inputs”:”$b64”}" \
  "https://huggingface.co/proxy/api-inference.huggingface.co/models/google/vit-base-patch16-224"

Supplemental references

  • Serverless Inference API cookbook. Endpoint format, auth, rate limits, cold-start behavior. Updated frequently. Checked 2025-10-15. (Hugging Face)
  • Image Classification API spec. Exact request shapes. Base64 vs raw bytes. Response schema. Checked 2025-10-15. (Hugging Face)
  • Supported tasks reference. Shows concrete --data-binary image examples. Useful when crafting raw HTTP calls. Checked 2025-10-15. (Hugging Face)
  • User access tokens. Bearer token usage and management. Checked 2025-10-15. (Hugging Face)