ONVIF Profile M standard for video surveillance with object detection and analytics interoperability

Key takeaways

ONVIF Profile M standardizes analytics metadata, not pixels. It lets a camera from Axis, a VMS from Milestone, and an MQTT broker from AWS agree on what “person crossing line at 14:22:03” looks like on the wire.

Mandatory vs optional matters more than the Profile M badge. RTP/XML metadata streaming and the analytics service are mandatory; MQTT JSON events, rule engine, geolocation, and face/LPR attributes are optional — always read the Declaration of Conformance (DoC) before buying.

Adoption is lopsided. Axis, Bosch, Dallmeier, Hanwha and Milestone XProtect lead; mid-tier and budget brands (Hikvision, Dahua) still lean on proprietary SDKs. Plan for a mixed fleet for at least another 2–3 years.

Detection capacity is a hardware question, not a Profile M question. Profile M transports metadata; the number of plates/faces per frame depends on sensor resolution, WDR, frame rate, analytics SoC (Ambarella CV, Jetson, Axis ARTPEC) and your confidence threshold.

Custom integration is where most projects stall. Metadata archival, class taxonomy mapping, NTP drift, and MQTT event storms are the four recurring traps — budget 150–300 engineering hours for a serious Profile M consumer, less if you are using Agent Engineering.

Why Fora Soft wrote this playbook

Fora Soft has been shipping video-streaming and surveillance software for 21 years. Of the 625+ products we have delivered, the IP-camera, VMS, body-worn, courtroom recording, and video analytics projects share one recurring pain: metadata. Streams are easy — RTSP, H.264, H.265, done. Metadata is where multi-vendor deployments fall apart, and where ONVIF Profile M is supposed to help.

This playbook pulls together what we have learned integrating Profile M and the surrounding ONVIF analytics stack on real projects: what the spec actually requires, what the camera marketing team quietly omits, how to validate a DoC in five minutes, how to wire a Profile M stream into a VMS that was originally built around proprietary SDKs, and when you should simply skip Profile M and use a vendor SDK directly. See our video surveillance services and project portfolio for context on the scale and industries we work in.

Stuck choosing Profile M vs a proprietary SDK?

A 30-minute call with a Fora Soft engineer will save you weeks of integrator-level trial and error on camera shortlists and VMS architecture.

Book a 30-min call → WhatsApp → Email us →

What ONVIF Profile M actually standardizes

ONVIF Profile M is the metadata and analytics profile published by the Open Network Video Interface Forum in 2021. Where Profile S and Profile T handle video streaming, Profile M handles everything that describes what is inside the stream — object detections, events, scene descriptions, and the configuration of the analytics module that produces them. For a deeper tour of the other profiles (S, G, T, C, D, A) see our companion piece, ONVIF Profiles in Security Systems.

In concrete terms, Profile M guarantees three things across compliant cameras, encoders, and VMS clients:

1. A mandatory metadata stream. Object bounding boxes, center-of-gravity, class labels and simple attributes are serialized as ONVIF Scene Description XML inside an RTP payload. Any Profile M client can parse these fields without proprietary drivers.

2. A standard event model. Detection and rule-engine events fire over the ONVIF Events service (XML over SOAP, pull-point or base notification) and, optionally, over MQTT with JSON payloads — opening the door to IoT brokers and cloud analytics.

3. A configurable analytics service. Clients can discover which analytics modules a device runs (motion, tamper, line-crossing, people counting, LPR, face), tune their parameters, and subscribe to their events — all without vendor-specific firmware tools.
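To make point 1 concrete, here is a minimal Python sketch that parses one Scene Description frame. The sample payload is illustrative, but the `tt:` namespace URI and the Frame/Object/Appearance/Shape element layout follow the ONVIF core schema; exact optional elements vary by device and firmware.

```python
import xml.etree.ElementTree as ET

TT = "http://www.onvif.org/ver10/schema"  # ONVIF core schema namespace

# Illustrative frame, shaped like what arrives inside the RTP metadata payload.
SAMPLE = f"""
<tt:MetadataStream xmlns:tt="{TT}">
  <tt:VideoAnalytics>
    <tt:Frame UtcTime="2025-06-01T14:22:03.250Z">
      <tt:Object ObjectId="42">
        <tt:Appearance>
          <tt:Shape>
            <tt:BoundingBox left="-0.40" top="0.30" right="-0.10" bottom="-0.20"/>
            <tt:CenterOfGravity x="-0.25" y="0.05"/>
          </tt:Shape>
          <tt:Class>
            <tt:Type Likelihood="0.92">Human</tt:Type>
          </tt:Class>
        </tt:Appearance>
      </tt:Object>
    </tt:Frame>
  </tt:VideoAnalytics>
</tt:MetadataStream>
"""

def parse_frame(xml_text):
    """Extract object id, bounding box, class, and likelihood from one frame."""
    ns = {"tt": TT}
    root = ET.fromstring(xml_text)
    detections = []
    for obj in root.iterfind(".//tt:Object", ns):
        box = obj.find(".//tt:BoundingBox", ns)
        cls = obj.find(".//tt:Class/tt:Type", ns)
        detections.append({
            "object_id": int(obj.get("ObjectId")),
            "bbox": {k: float(box.get(k)) for k in ("left", "top", "right", "bottom")},
            "class": cls.text if cls is not None else None,
            "likelihood": float(cls.get("Likelihood")) if cls is not None else None,
        })
    return detections
```

In production the XML arrives as RTP payload fragments; reassemble a complete frame before parsing.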

How Profile M relates to Profiles S, T, C, and G

Profile M never replaces a streaming profile — it layers on top. A useful camera almost always pairs Profile M with Profile T (advanced streaming, H.265, alarm handling) and frequently Profile G (edge storage). If you are doing access control with face or plate triggers, Profile C joins the stack. The rule of thumb: Profile S/T moves pixels, Profile M moves meaning, Profile C moves access decisions, Profile G moves archives.

The Profile M metadata model in 60 seconds

Every Profile M device produces a Scene Description that is structured like a tree. The root represents the scene (camera view). Each detected object is a child node with a set of mandatory and optional fields. Understanding this structure is the single highest-leverage thing a VMS engineer can do; nearly every integration bug traces back to misreading one of these fields.

| Field | Mandatory? | Meaning | Why it breaks in the field |
|---|---|---|---|
| BoundingBox | Yes | Rectangle in normalized coordinates (−1…1). | Normalization direction varies by vendor; flip Y axis if boxes appear mirrored. |
| CenterOfGravity | Yes | Single point representing object location. | Some cameras only populate this for certain classes (e.g., Vehicle, not Face). |
| ObjectId | Yes | Stable tracking ID within the session. | Resets on camera reboot; re-identification across cameras is out of scope. |
| Class/Type | Optional | Human, Vehicle, Animal, Face, LicensePlate, Bag… | Taxonomies differ (“Person” vs “Human”); you will need a mapping table. |
| Appearance | Optional | Color, size, face descriptor, plate text, vehicle make. | Rich fields most often missing; verify per-DoC before relying on them. |
| GeoLocation | Optional | Lat/lon/alt, often derived from PTZ calibration. | Requires calibrated PTZ / fixed scene; uncalibrated output is nearly useless. |
| Confidence | Optional | Detector score 0…1. | Some cameras output 1.0 for every detection; treat it skeptically until validated. |

Reach for Profile M metadata when: you need to search, replay, or alert on objects across a multi-vendor camera fleet without writing N adapters. Skip it when all cameras and the VMS are from the same manufacturer — the native SDK will expose more fields with less ceremony.
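The BoundingBox caveat in the table above is the most common first bug, and it is cheap to guard with a helper. A sketch of the conversion from ONVIF's normalized −1…1 space to pixel coordinates; the default Y flip is an assumption to calibrate per camera model, not a spec guarantee.

```python
def onvif_box_to_pixels(box, width, height, flip_y=True):
    """Map an ONVIF BoundingBox (left/top/right/bottom in -1..1) to pixel coords.

    ONVIF puts the origin at frame center with +Y pointing up, while image
    coordinates put the origin top-left with +Y pointing down, hence the
    default flip. Some vendors pre-flip; calibrate per model and set flip_y.
    """
    def x_px(x):  # -1..1 -> 0..width
        return (x + 1.0) / 2.0 * width

    def y_px(y):  # -1..1 -> 0..height, optionally inverted
        return (1.0 - y) / 2.0 * height if flip_y else (y + 1.0) / 2.0 * height

    x1, x2 = sorted((x_px(box["left"]), x_px(box["right"])))
    y1, y2 = sorted((y_px(box["top"]), y_px(box["bottom"])))
    return {"x": round(x1), "y": round(y1), "w": round(x2 - x1), "h": round(y2 - y1)}
```

If boxes render mirrored or upside down on a given model, toggle `flip_y` rather than patching coordinates downstream.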

Events, MQTT, and the IoT bridge

Profile M inherits ONVIF’s event service (XML over SOAP, pull-point or base notification) and adds an optional MQTT binding with JSON payloads. The MQTT bridge is the piece most projects actually care about: it turns a camera into a first-class IoT device that any broker (HiveMQ, Mosquitto, AWS IoT Core, Azure IoT Hub) can ingest.

A typical deployment looks like this

1. Camera (analytics on edge). Detects objects, applies rules (line crossing, loitering, counting), publishes JSON events to an MQTT topic such as site/building-a/cam-17/events/line-crossing.

2. Broker. Local Mosquitto for low-latency alerting, or a cloud broker (AWS IoT Core) for multi-site aggregation. TLS and client certificates are non-negotiable for production.

3. Consumers. The VMS subscribes for visual search and archive indexing. A business intelligence service subscribes for retail counting. An access control system subscribes for LPR plate triggers. None of them needs to know anything about the underlying camera vendor.

4. Event storm guardrails. Set the confidence threshold no lower than 0.7 on the camera, aggregate by object ID at the publisher, and use broker-side topic filtering. A busy urban camera can emit 30–100 events per second at default thresholds, and we have seen unbatched streams saturate a $50/month cloud broker plan in under an hour.
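The guardrails in step 4 can be sketched as a small publisher-side filter. The event dicts and field names below are illustrative conventions, not anything the spec defines.

```python
import time

class EventThrottle:
    """Drops low-confidence detections and rate-limits per-object re-publishes.

    A publisher-side storm guardrail: each (camera, object, rule) key may
    publish at most once per `min_interval` seconds.
    """

    def __init__(self, min_confidence=0.7, min_interval=2.0, clock=time.monotonic):
        self.min_confidence = min_confidence
        self.min_interval = min_interval
        self.clock = clock
        self._last_sent = {}  # (camera, object_id, rule) -> last publish time

    def should_publish(self, event):
        if event.get("confidence", 0.0) < self.min_confidence:
            return False  # below threshold: drop at the edge
        key = (event["camera"], event["object_id"], event.get("rule"))
        now = self.clock()
        if now - self._last_sent.get(key, float("-inf")) < self.min_interval:
            return False  # same object, same rule, too soon: aggregate instead
        self._last_sent[key] = now
        return True
```

Injecting the clock makes the throttle unit-testable and keeps it broker-agnostic; wire `should_publish` in front of whatever MQTT client you use.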

Reach for MQTT + Profile M when: your analytics needs multiple consumers (VMS, BI, access control, SCADA) and/or cloud aggregation. Stay on plain ONVIF events when you have a single VMS consumer inside the same LAN.

Designing an MQTT event pipeline?

We have shipped MQTT-to-ONVIF bridges for retail, smart-city, and industrial clients. Bring your topology and we will tell you where it will break first.

Book a 30-min scoping call → WhatsApp → Email us →

How to read a Declaration of Conformance in five minutes

“Profile M compliant” on a spec sheet is a marketing claim; the DoC is the contract. Every ONVIF-certified product has a DoC PDF listed at onvif.org/conformant-products. The PDF lists every tested feature, mandatory and optional, with a tick for each one actually implemented by the product and firmware version you are buying.

The 8-point DoC checklist

1. Firmware version. Conformance is granted to a specific firmware build. Upgrading can (and sometimes does) break compliance. Lock firmware in your deployment bill of materials.

2. Metadata streaming. Always mandatory. Verify the stream URL pattern and RTP port range.

3. Analytics service. Always mandatory. Confirm the WSDL endpoint and the list of exposed analytics modules.

4. MQTT events. Optional. If you need IoT integration, this box must be ticked. Check for TLS, QoS 1/2, and authentication mode.

5. Rule engine. Optional. Line crossing, loitering, tailgating, counting — only here if this is ticked. Otherwise the camera emits raw detections and you build rules downstream.

6. Object classes. Optional. Map the listed classes to your canonical taxonomy. A camera with only Human + Vehicle will not give you LPR or face metadata.

7. Geolocation. Optional. Usually tied to PTZ calibration or preset fields of view.

8. Authentication. Digest mandatory; WS-Security and HTTPS optional but strongly preferred. Skip any device without HTTPS for a 2026 deployment.

Who actually ships Profile M in 2026

Profile M landed in 2021; adoption has been steady but uneven. Enterprise camera brands moved first; mid-tier and budget brands are still catching up. Our fleet observations across client deployments look roughly like this:

| Tier | Representative brands | Profile M status | Integration note |
|---|---|---|---|
| Enterprise IP cameras | Axis, Bosch, Dallmeier, Hanwha, Pelco | Broad; select model families certified. | Rich metadata, MQTT on most 2023+ firmware. |
| VMS platforms | Milestone XProtect, Genetec, Avigilon | Consumption ahead of production. | Milestone certified first (2022); others partial. |
| Mid-tier IP cameras | Vivotek, Lorex, i-PRO | Partial, many models. | DoC varies widely; validate per model. |
| Budget / high-volume | Hikvision, Dahua, Reolink, Uniview | Rare; mostly proprietary. | Expect ISAPI/SDK integration, not Profile M. |
| Cloud / VSaaS | Eagle Eye, Verkada, Spot AI | Partial, via ONVIF gateway. | Usually exposes Profile M upstream of proprietary clouds. |

Bottom line: in any mixed fleet that includes Hikvision or Dahua, you still need a proprietary SDK path for at least part of the system. Our AI-powered IP camera trends piece goes deeper on the SoC roadmap behind these differences.

How many people or plates can a Profile M camera actually recognize?

This is the most common question buyers ask, and it has nothing to do with Profile M. Profile M standardizes how detections travel. Capacity is determined by five hardware and scene factors:

1. Sensor resolution and pixels-on-target

For reliable face detection, a face needs roughly 80–120 pixels between the eyes. A license plate needs to be 150–180 pixels wide. A 4K (8 MP) sensor with a 60° field of view can keep 3–5 faces on-target at 10 m; a 2 MP sensor at the same FOV struggles past one. The physics do not care what metadata protocol is used.
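That pixels-on-target arithmetic is worth scripting before you shortlist hardware. A back-of-envelope sketch; the 60° FOV and the 520 mm EU plate width are assumptions, so substitute your own sensor and scene numbers.

```python
import math

def pixels_on_target(h_resolution, hfov_deg, distance_m, target_width_m):
    """Horizontal pixels covering a target of the given width at a distance."""
    # Width of the scene the sensor sees at that distance, from the FOV angle.
    scene_width_m = 2.0 * distance_m * math.tan(math.radians(hfov_deg) / 2.0)
    pixels_per_meter = h_resolution / scene_width_m
    return target_width_m * pixels_per_meter

# 4K sensor (3840 px wide), 60-degree horizontal FOV, EU plate (0.52 m) at 10 m:
plate_px = pixels_on_target(3840, 60, 10, 0.52)  # ~173 px, inside the 150-180 band
```

The same function answers face questions: plug in the face width you care about and check it against your detector's minimum.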

2. Analytics SoC horsepower

Axis ARTPEC-9, Ambarella CV25/CV52, Hikvision AcuSense-class processors, and NVIDIA Jetson Orin Nano all ship in today's analytics cameras. Low-end SoCs cap out at 20–40 simultaneous detections; high-end ones exceed 200. The DoC will not tell you this — check the product datasheet, or ask the vendor.

3. Frame rate and motion

Vehicles at 60 km/h traverse a typical FOV in under a second. At 15 fps the plate is blurred in most frames; at 30–60 fps with a fast electronic shutter you get usable ALPR reads. Frame rate and shutter speed trade against low-light sensitivity — the vendor datasheet will spell this out.

4. WDR, IR and low-light behavior

Shadowed entrances and backlit doorways kill face detection faster than any code. 120 dB WDR, starlight sensors, and IR illumination are the usual fixes. See our write-up on real-time video processing with AI for scene-level recommendations.

5. Confidence threshold

Every analytics module is configurable. Lower the threshold and capacity climbs — with a matching rise in false positives and MQTT load. For retail heatmaps we typically run at 0.55; for access control at 0.85; for LPR plate text at 0.80 after cross-validation.

Reference architecture for a Profile M-native VMS

This is the blueprint we use when a client asks Fora Soft for a VMS that is Profile M-native from day one. It is deliberately simple; the complexity lives in the adapters.

1. Edge layer. Profile M cameras (Axis, Bosch, Dallmeier) publish RTP metadata and MQTT events. For non-Profile-M cameras (Hikvision, Dahua) run a thin analytics integration gateway that translates proprietary SDKs into Profile M JSON.

2. Broker. Mosquitto cluster on 2 Hetzner AX-52 servers for in-region deployments; AWS IoT Core for multi-region clients. TLS 1.3 plus client certs.

3. Metadata store. TimescaleDB (on top of PostgreSQL) for time-series event indexing; S3 / Backblaze B2 for ONVIF Scene Description XML blobs keyed by camera ID and timestamp. Retention 30–180 days on hot storage depending on vertical.

4. VMS core. Stream manager (GStreamer + Janus) for video; event router for metadata. Web client streams video via WebRTC; metadata via WebSocket overlays.

5. Analytics bus. Kafka or NATS internally; downstream consumers are the search service, alerting service, and BI pipelines.

6. Clients. Web, iOS, Android; see our Android SDK for video surveillance breakdown for the mobile side.
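The gateway in the edge layer (step 1) is, at its core, one mapping function per vendor. A sketch with a hypothetical proprietary event shape: the input field names below are invented for illustration, not real ISAPI fields, and the output schema is simply the normalized shape a downstream Profile M consumer might ingest.

```python
# Vendor class strings (keys are hypothetical) mapped to a canonical taxonomy.
CLASS_MAP = {"human": "Human", "vehicle": "Vehicle", "plate": "LicensePlate"}

def to_profile_m_json(vendor_event, camera_id):
    """Translate one proprietary detection event into the normalized schema."""
    return {
        "camera": camera_id,
        "utc_time": vendor_event["time"],     # keep source timestamp; NTP enforced elsewhere
        "object_id": vendor_event["track_id"],
        "class": CLASS_MAP.get(vendor_event["type"], "Unknown"),
        "confidence": vendor_event.get("score"),
        "bbox": vendor_event["rect"],         # this vendor already emits pixel coords
    }
```

Mapping unknown classes to an explicit "Unknown" (and logging them) is what catches firmware updates that quietly add new class strings.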

Reach for a Profile M-native VMS when: your roadmap includes at least two camera vendors and downstream BI or SCADA consumers. Skip it if you are locked in with one vendor and only the security team will ever look at the stream.

Five production use cases Profile M actually unlocks

License plate recognition at the gate

Profile M emits a LicensePlate object with bounding box, plate text (optional), and a confidence score. A dual-camera setup (one for scene context, one with LPR-tuned optics) is common. We typically cross-validate plate reads against a secondary ALPR server because optional plate-text fields do not carry guarantees on OCR accuracy.
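That cross-validation step can be as simple as an edit-distance gate between the camera's optional plate text and the secondary ALPR read. The 0.8 acceptance ratio below is our own working convention, not anything in the spec; tune it against your plate format.

```python
def levenshtein(a, b):
    """Classic dynamic-programming edit distance between two strings."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,          # deletion
                           cur[j - 1] + 1,       # insertion
                           prev[j - 1] + (ca != cb)))  # substitution
        prev = cur
    return prev[-1]

def plates_agree(camera_read, alpr_read, min_similarity=0.8):
    """Accept a plate only when the two OCR reads are close enough."""
    a = camera_read.upper().replace(" ", "")
    b = alpr_read.upper().replace(" ", "")
    if not a or not b:
        return False
    similarity = 1.0 - levenshtein(a, b) / max(len(a), len(b))
    return similarity >= min_similarity
```

Normalizing case and whitespace first matters more than the distance metric; most disagreements between two OCR engines are formatting, not characters.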

Building access control with face triggers

Profile M does detection, not 1:N face recognition. The common pattern is: camera detects a Face and exports the crop via ONVIF or RTSP snapshot, a downstream face recognition service (in-house or a NIST-FRVT vendor) resolves the identity, Profile C opens the door. Keeping the face template DB outside the camera is both a privacy and a maintainability win.

Retail heatmaps and queue analytics

Object IDs + timestamps + bounding boxes are enough to build dwell-time heatmaps, aisle-to-aisle funnels, and queue length graphs. MQTT JSON plays directly into BI tooling (Grafana, Looker). For a deeper look see our piece on retail video analytics.

Perimeter and intrusion detection

Line-crossing and zone-intrusion rules in the camera’s rule engine, published as Profile M events, feed a central alerting service. Combined with classical rules or AI-based anomaly detection, this covers 80% of typical perimeter workflows.

Industrial safety compliance

PPE detection (hard hats, hi-vis vests, gloves) is increasingly shipped as a Profile M analytics module on industrial-grade cameras. We have built this for construction and oil and gas clients; a read-through of hard-hat detection in video surveillance and machine-learning algorithms for surveillance anomalies captures the ML-side decisions.

Mini case — multi-vendor fleet, one Profile M backplane

Situation. A regional physical-security integrator came to us with a 240-camera estate spread across three office campuses: Axis P-series on the perimeter, Bosch Flexidomes indoors, and a legacy rack of 60 Hikvision DS-series that had to live out its warranty. Three SDKs, three event models, and a monitoring team that was about to hire two extra operators just to babysit the dashboards.

The 10-week plan. Week 1–2: DoC audit of every Axis and Bosch model, Hikvision triage (no Profile M). Week 3–5: Profile M consumer for Axis/Bosch plus an ISAPI→Profile M gateway for the Hikvision rack. Week 6–7: MQTT broker (Mosquitto cluster on Hetzner), Timescale metadata store, search API. Week 8–9: operator console integration, NTP governance, alert rules. Week 10: acceptance, load test, runbooks.

Outcome. End-to-end alert latency dropped from ~3.2 s to under 900 ms, 30-day metadata search queries returned in < 1 s p95, and the ops team stopped maintaining three separate vendor plugins. No extra operator headcount was needed. For similar multi-vendor fleet stories, see our scalable VMS engineering decisions piece.

Tooling that makes Profile M work in practice

ONVIF Device Manager. Free Windows tool, still the fastest way to sanity-check a new camera. Will show you the metadata stream, the analytics service, and the event topics without a line of code.

gSOAP + onvif-sdk. For a production C/C++ consumer, generate your WSDL bindings with gSOAP; Python shops usually reach for the onvif-zeep library. Expect to maintain a local copy of the WSDLs — online schema resolution is brittle.

onvif2mqtt / onvif-mqtt. Open-source bridges that translate ONVIF XML events to MQTT JSON. Good as a reference; bring your own error handling and auth before putting it near production.

MQTT Explorer. GUI MQTT client for inspecting topics and payloads live. Essential when you are debugging the event-storm moment.

ONVIF conformance test tool. Vendor-only, but if you are building a device you will live in it. ONVIF reference implementations (Happytime ONVIF Server, for example) are useful for test benches.

Wireshark + RTP dissectors. When Scene Description XML looks broken, the RTP payload is where the truth is. Keep a tcpdump runbook handy.

Five integration challenges Profile M does not fix

1. Metadata is not archived by default. Most VMS platforms record video, discard the metadata stream, and then wonder why forensic search is useless a week later. Configure parallel metadata archival (we typically use JSON sidecar files plus a Timescale index) before you bring anything live.

2. Class taxonomies drift. Axis “Human”, Hanwha “Person”, Bosch “Pedestrian” — same thing, different strings. Maintain a mapping table and log unknown classes; you will want to know when a firmware update quietly introduces a new one.

3. Time sync is assumed, not enforced. Profile M does not mandate NTP. A 900 ms drift between camera and VMS makes metadata look like it belongs to a different frame, and investigators start doubting the system. Enforce NTP with <100 ms target accuracy and alert on drift >500 ms.

4. Optional features silently differ. Two cameras from the same vendor on the same line can ship different optional features depending on firmware SKU. Always test the DoC-declared feature set on the exact hardware revision, not a demo unit.

5. Rule engines are not interchangeable. Line-crossing rules configured on an Axis camera are not portable to a Bosch camera — Profile M standardizes the event output, not the rule authoring format. Build your configuration in a vendor-agnostic model and compile down to each camera’s rule API.
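Challenge 3 (time sync) is the cheapest of the five to guard in code. A coarse drift tripwire, assuming frames carry an ISO-8601 UtcTime; the 0.5 s threshold mirrors the alert rule suggested above.

```python
from datetime import datetime, timezone

DRIFT_ALERT_S = 0.5  # alert threshold from the NTP governance policy above

def check_drift(frame_utc_iso, received_at=None):
    """Return (drift_seconds, alert) comparing a frame's UtcTime to server time.

    Positive drift means the camera clock is ahead of the server. Network
    latency is folded in, so treat this as a coarse tripwire on the metadata
    path, not a precise NTP probe.
    """
    frame_t = datetime.fromisoformat(frame_utc_iso.replace("Z", "+00:00"))
    received_at = received_at or datetime.now(timezone.utc)
    drift = (frame_t - received_at).total_seconds()
    return drift, abs(drift) > DRIFT_ALERT_S
```

Run it on a sampled fraction of frames per camera and alert on sustained breaches rather than single spikes.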

What a Profile M VMS really costs to integrate

These are the engineering estimates we give clients before kickoff. Numbers assume our Agent Engineering-accelerated workflow; non-accelerated teams should expect 40–60% more hours.

| Workstream | Fora Soft hours | Scope |
|---|---|---|
| Profile M consumer (discovery, auth, WSDL) | 80–130 | ONVIF Device Manager integration, metadata stream parsing, analytics service configuration. |
| MQTT bridge + schema mapping | 30–60 | Broker config, TLS, topic taxonomy, JSON mapping, deduplication. |
| Metadata archive + search | 60–100 | TimescaleDB schema, S3 blob store, REST search API. |
| NTP + calibration tooling | 20–40 | Drift monitoring, bounding-box calibration UI, geofence editor. |
| Proprietary-SDK adapters (per brand) | 40–80 | Hikvision ISAPI / Dahua SDK / Axis VAPIX bridged into Profile M. |
| Total for a production Profile M VMS | ~230–410 | Excludes general VMS features (video, user, storage, reports). |

Typical camera hardware cost bands we see on BOMs: $400–$800 for entry-level Profile M (basic detection, no MQTT), $900–$1,800 mid-tier (face/plate + MQTT), $2,000–$4,000+ for high-resolution analytics cameras with WDR, IR, and on-device rule engines.

Need a tight estimate for your Profile M VMS?

Send us your camera shortlist, target architecture, and deployment size. We will come back with a fixed-scope Phase 0 in 2–3 business days.

Book an estimate call → WhatsApp → Email us →

A decision framework — when to mandate Profile M in five questions

Q1. Will the camera fleet be multi-vendor? If yes, Profile M is a must. If you are single-vendor for the foreseeable future, the native SDK usually exposes more features faster.

Q2. Will multiple systems consume the same analytics stream? VMS + BI + access control + SCADA = Profile M + MQTT. With a single VMS consumer, plain Profile M events or even proprietary events are fine.

Q3. Does the required feature set fit Profile M’s scope? If you need face recognition, Profile M alone is not enough — you will add a recognition service. If you need only detection + events, Profile M covers the use case cleanly.

Q4. Do you have integrators comfortable with SOAP / gSOAP / MQTT / RTP? These are not modern stacks. If the team is JavaScript-only, budget extra hours for the on-ramp.

Q5. What happens at firmware upgrade? If you have a tight change-control regime (good), Profile M conformance locking is easy. If firmware is upgraded ad-hoc by field techs (common), plan for a staging rig that revalidates DoC features before rollout.

Three or more “yes” answers — adopt Profile M. Fewer — stick with a vendor-native approach and re-evaluate at the next hardware refresh.

Five pitfalls that sink Profile M projects

1. Treating the Profile M badge as a feature list. The badge says “we passed a conformance test with a specific firmware”. It does not say the camera supports plate OCR, MQTT, or geolocation. Always read the DoC.

2. Under-specifying confidence thresholds. Leaving defaults on in a busy scene floods the broker, the store, and the alerting UI. Set class-specific thresholds and monitor the event rate from day one.

3. Forgetting the metadata archive. Six months into production, the SOC asks “what was in that zone at 02:14 last Tuesday?” and gets video with no metadata overlay. Design the archive before go-live, not after.

4. No NTP governance. Every camera and every server on its own NTP policy = clock drift. Standardize on a single stratum-1 source and alert on drift.

5. No plan for non-conformant cameras. In any real deployment with more than 20 cameras, at least one will be non-conformant or off-spec. Build the gateway layer up front instead of scrambling later.

KPIs: what to measure on a Profile M pipeline

Quality KPIs. Recall ≥ 0.85 on object classes in scope; precision ≥ 0.80; class confusion rate ≤ 5%. Validate quarterly with labelled test sets per site, not once at commissioning.

Business KPIs. End-to-end alert latency (camera detection → operator console) < 1 s; search query latency over 30-day metadata < 2 s at p95; false-alarm acknowledge rate in the SOC < 3%.

Reliability KPIs. Metadata stream uptime ≥ 99.5% per camera per month; NTP drift < 100 ms p99; MQTT broker throughput headroom ≥ 30% above peak observed.

When Profile M is the wrong answer

Profile M is not free. It adds a WSDL learning curve, a metadata archive, a class-mapping service, and an operations burden (DoC governance, NTP). Skip it when:

Single-vendor deployment. A Milestone + Axis or Genetec + Hanwha setup gets more features out of the native driver. Profile M is insurance for vendor flexibility.

Sub-10 camera projects. The overhead of Profile M plumbing does not amortize over small sites. Stick with RTSP + events and a single analytics container.

Consumer-grade or “cloud-camera” products. If you are building a product where the camera is bundled with a cloud backend (think Ring or Nest territory), Profile M adds rigor you will not use; a proprietary protocol is lighter.

Ultra-low-latency alerting. For sub-100 ms alerting (trading floor surveillance, perimeter intrusion at critical facilities), direct WebRTC + in-band alarm metadata can beat the Profile M path. Use Profile M for archive and BI, not the hot path.

Reach for a proprietary SDK when: you need a specific advanced feature (best-in-class LPR, face search, behaviour analytics) that only one camera vendor ships, and you are ready to accept vendor lock-in on that path.

FAQ

Is “ONVIF compliant” the same as “Profile M compliant”?

No. ONVIF compliance is broad; it can mean Profile S (streaming), G (storage), T (advanced streaming), C (access control), M (metadata), D (access control peripherals), or A (access control configuration). Profile M compliance is a specific subset for metadata and analytics. A camera can be ONVIF compliant without ever passing Profile M.

Does Profile M require Profile T or Profile S?

Profile M itself does not require a streaming profile, but in practice, every camera you would actually buy ships Profile M together with either Profile T or Profile S so that a client can pull video and metadata with the same toolkit. The ONVIF spec treats Profile M as layered on top of whatever streaming profile the device implements.

Is MQTT mandatory in Profile M?

No. MQTT is an optional binding for ONVIF events in Profile M. The mandatory event path is the ONVIF Events service over SOAP. Check the DoC for MQTT support before you architect an IoT pipeline around it.

Can I do facial recognition with Profile M alone?

Only detection, not 1:N recognition. Profile M can carry a face bounding box and, optionally, an opaque face descriptor. Turning that into an identity match requires a separate face-recognition service and template database. That is almost always how you want it anyway, for privacy and maintainability.

How do I know if a specific camera supports MQTT under Profile M?

Download the Declaration of Conformance from onvif.org/conformant-products and look for the MQTT event interface row. If it is ticked, the camera passed the conformance test for MQTT; if not, the feature is not guaranteed even if the marketing sheet lists “IoT ready”.

Does Profile M guarantee LPR plate text accuracy?

No. The plate-text attribute is optional, and the spec only standardizes how to carry the text and a confidence value — not how accurate the OCR is. OCR quality depends on the camera’s analytics module, resolution, angle, and lighting. For production ALPR, validate independently.

Does my VMS automatically archive Profile M metadata?

Most VMS platforms do not. They record video and sometimes index key events, but the full Scene Description metadata is discarded by default. If forensic search is in scope, configure a sidecar metadata store (we use TimescaleDB plus S3-compatible object storage for scene-description blobs).

How many cameras can one Profile M consumer handle?

In our deployments, a single consumer process on a modest VM (4 vCPU, 8 GB RAM) handles 60–120 cameras at typical event rates (< 20 events/s/camera). Past that, shard by camera ID across workers, and let the broker do the fan-out. Profile M event volume scales linearly with camera count and confidence thresholds.

Does upgrading firmware break Profile M conformance?

It can. Conformance is granted to a specific firmware build. Most vendors retest on major releases, but occasional regressions happen. Treat firmware upgrades like any other production change: test on a staging rig against your own ingestion pipeline, then roll forward.

ONVIF deep dive

ONVIF Profiles in Security Systems

The companion guide to S, G, T, C, D, A — how each profile slots into a modern VMS.

Analytics

Object Recognition Camera Solutions with ML

Why the ML pipeline matters more than the protocol for detection accuracy.

IP camera trends

AI-Powered IP Cameras: Trends to Watch

The SoC and edge-AI roadmap that is accelerating Profile M adoption.

VMS design

12 Essential Features of a Modern VMS

Where Profile M support fits into the wider VMS feature matrix in 2026.

Ready to build Profile M into your product?

ONVIF Profile M gives multi-vendor surveillance systems something that took nearly a decade to materialize: a common language for detections and analytics events. The badge alone is not a strategy — the real work is reading DoCs correctly, specifying cameras by hardware and scene requirements, building a metadata archive, and keeping MQTT event storms in check. Done well, Profile M cuts integration from months to weeks and keeps the door open to every downstream analytics consumer you have not thought of yet.

Done badly, it becomes another line item that did not deliver. If you want to skip the learning curve, Fora Soft has already shipped the gateway, the consumer, the archive and the operational playbook across retail, smart-city, industrial, and enterprise security projects.

Let’s turn Profile M into a product advantage

Bring your camera list, your integration goals, and any proprietary SDK pain you are living with. We will map the shortest path to a Profile M-native VMS.

Book a 30-min call → WhatsApp → Email us →
