Encrypted Audio Transcriptions (RVENC)

POST /rvenc/audio/transcriptions
curl --request POST \
  --url https://proxy.cci.prem.io/rvenc/audio/transcriptions \
  --header 'Authorization: <api-key>' \
  --header 'Content-Type: application/json' \
  --data '
{
  "encryptedInference": "<string>",
  "cipherText": "<string>",
  "nonce": "<string>",
  "encryptedFileName": "<string>",
  "fileNameNonce": "<string>",
  "encryptedFile": "<string>",
  "fileNonce": "<string>"
}
'
{
  "status": 200,
  "data": {
    "id": "123"
  },
  "error": null,
  "log": null,
  "validator": null,
  "support_id": null,
  "message": "Resource created successfully",
  "env": "development"
}
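As a reference point, the request envelope above can be assembled in TypeScript along these lines. This is a sketch with placeholder values: in practice each field is produced by the client-side encryption step described under Body, and the interface name is illustrative, not an SDK export.

```typescript
// Shape of the rvenc request body, mirroring the fields documented below.
interface RvencTranscriptionRequest {
  encryptedInference: string;
  cipherText: string;
  nonce: string;
  encryptedFileName: string;
  fileNameNonce: string;
  encryptedFile: string;
  fileNonce: string;
}

// Placeholder values; in practice each field comes out of the
// XChaCha20-Poly1305 encryption step described in the Body section.
const body: RvencTranscriptionRequest = {
  encryptedInference: "<encrypted-params>",
  cipherText: "<ecdh-cipher-text>",
  nonce: "<inference-nonce>",
  encryptedFileName: "<encrypted-file-name>",
  fileNameNonce: "<file-name-nonce>",
  encryptedFile: "<encrypted-file-bytes>",
  fileNonce: "<file-nonce>",
};

// This JSON string is what the curl example sends as --data.
const payload = JSON.stringify(body);
```

The resulting `payload` is posted to the endpoint exactly as in the curl example above.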

TypeScript SDK

The SDK is available on GitHub: premAI-io/pcci-sdk-ts

Usage

Basic Setup

Create a client with auto-generated encryption keys:
import createRvencClient from "./index";
import fs from "fs";

const client = await createRvencClient({
  apiKey: "your-api-key",
});

Pre-generate Keys

You can pre-generate encryption keys and reuse them:
import createRvencClient, { generateEncryptionKeys } from "./index";

const encryptionKeys = await generateEncryptionKeys();

const client = await createRvencClient({
  apiKey: "your-api-key",
  encryptionKeys,
  requestTimeoutMs: 60000,  // optional
  maxBufferSize: 20 * 1024 * 1024, // optional
});

Basic Transcription

const transcription = await client.audio.transcriptions.create({
  file: fs.createReadStream('./audio.wav'),
  model: 'openai/whisper-large-v3',
});

console.log(transcription.text);

Transcription with Options

const transcription = await client.audio.transcriptions.create({
  file: fs.createReadStream('./audio.mp3'),
  model: 'openai/whisper-large-v3',
  language: 'en',
  prompt: 'This is a discussion about artificial intelligence.',
  response_format: 'verbose_json',
  temperature: 0,
  timestamp_granularities: ['word', 'segment'],
});

Configuration

| Option | Default | Description |
| --- | --- | --- |
| file | required | Audio file (ReadStream, Buffer, Blob, etc.) |
| model | required | Model ID (e.g., 'openai/whisper-large-v3') |
| language | auto-detect | ISO-639-1 language code (e.g., 'en', 'de', 'fr') |
| prompt | optional | Text prompt to guide the model |
| response_format | 'json' | Format: 'json', 'text', or 'verbose_json' |
| temperature | 0 | Sampling temperature (0-1) |
| timestamp_granularities | optional | Array of 'word' and/or 'segment' for timestamps |
Note: the openai/whisper-large-v3 model supports non-streaming transcription only.
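Assuming the SDK mirrors the OpenAI transcription parameters, the options above can be summarized as a TypeScript type along these lines (the interface name is illustrative, not the SDK's actual export):

```typescript
// Illustrative parameter type matching the configuration table.
interface TranscriptionOptions {
  file: unknown;                // required: ReadStream, Buffer, Blob, etc.
  model: string;                // required: e.g. 'openai/whisper-large-v3'
  language?: string;            // ISO-639-1 code; auto-detected if omitted
  prompt?: string;              // text prompt to guide the model
  response_format?: "json" | "text" | "verbose_json"; // default: 'json'
  temperature?: number;         // sampling temperature, 0-1; default 0
  timestamp_granularities?: ("word" | "segment")[]; // timestamps to include
}

const options: TranscriptionOptions = {
  file: Buffer.from("fake audio bytes"),
  model: "openai/whisper-large-v3",
  response_format: "verbose_json",
  timestamp_granularities: ["word", "segment"],
};
```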

OpenAI-Compatible API Server

Run as a standalone server with automatic DEK store management per API key:
npm start
# Server runs on http://localhost:3000
Use with any OpenAI-compatible client:
curl http://localhost:3000/v1/audio/transcriptions \
  -H "Authorization: Bearer your-api-key" \
  -F "[email protected]" \
  -F "model=openai/whisper-large-v3"
Or use the SDK directly in Node.js:
import OpenAI from "openai";

const client = new OpenAI({
  apiKey: "your-api-key",
  baseURL: "http://localhost:3000/v1",
});

const transcription = await client.audio.transcriptions.create({
  file: fs.createReadStream('./audio.mp3'),
  model: 'openai/whisper-large-v3',
});

console.log(transcription.text);
The server caches clients in memory per API key for better performance.

Authorizations

Authorization
string
header
required

Send your access token as the Authorization header: Authorization: Bearer {accessToken}

Authorization
string
header
required

Your API key that starts with sk_live or sk_test. You can create yours at go.prem.io/api-keys.

Body

application/json

Request body for rvenc (raw volatile encrypted) audio transcription. Contains an encrypted payload with cryptographic materials needed for decryption.

encryptedInference
string
required

Encrypted JSON string containing all audio transcription parameters. When decrypted, this string must match the structure shown in the expandable _decryptedInference property below (reference only - do not send this property).

cipherText
string
required

Cipher text for shared secret generation (ECDH key exchange)

nonce
string
required

Nonce used for encrypting the inference payload

encryptedFileName
string
required

The encrypted audio file name. The file name is encrypted using XChaCha20-Poly1305 with the shared secret and fileNameNonce.

fileNameNonce
string
required

Nonce used for encrypting the audio file name

encryptedFile
string
required

The encrypted audio file data. The audio content is encrypted using XChaCha20-Poly1305 with the shared secret and fileNonce.

fileNonce
string
required

Nonce used for encrypting the audio file data

Response

Encrypted audio transcription response. The response contains the encrypted transcription result which must be decrypted using the shared secret derived from the request's cipherText, along with the response nonce.

status
enum<integer>
required

Status code of the response

Available options:
200,
201,
202
data
object
required

Encrypted audio transcription response. Contains the encrypted transcription result that must be decrypted using the shared secret derived from the request's cipherText.

Example:
{
  "encryptedResponse": "a1b2c3d4e5f67890abcdef1234567890abcdef123456789012345678901234567890abcdef...",
  "nonce": "f6e5d4c3b2a19876543210fedcba9876"
}
message
string | null
required

Message of the response, human readable

Example:

"Resource created successfully"

env
enum<string>
required

API environment

Available options:
development,
production
error
string | null

Error message of the response, human readable

Example:

"Invalid email address"

log

Useful information, not always present, that helps debug the response

Examples:
{ "request_id": "req_1234567890" }

"Some pertinent log message"

validator

Validator response object; each key is a field name and each value is the error message for that field

Example:
{
  "email": "Invalid email address",
  "password": "Password is required"
}
support_id
string<uuid> | null

Support ID linked to the response, used to identify it when talking with our team

Example:

"support_uuidv7-something-else"