Bulk File System (BFS)

The BFS Connector is useful for outputting files and their metadata as separate entities. File binaries are output with their metadata stored in a companion file that uses the format shown below. The connector can also read these files back in for export to other systems.

[filename].metadata.properties.xml
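
A minimal sketch of a metadata file is shown below. It assumes the standard Java XML properties format; the entry keys (type, aspects, cm:title, cm:description) follow a common bulk-import convention and are illustrative only, so the exact keys expected by your target repository may differ.

IMG_1967.jpg.metadata.properties.xml:

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE properties SYSTEM "http://java.sun.com/dtd/properties.dtd">
<properties>
    <entry key="type">cm:content</entry>
    <entry key="aspects">cm:titled,cm:auditable</entry>
    <entry key="cm:title">Example image</entry>
    <entry key="cm:description">Imported through the BFS Connector</entry>
</properties>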

Authentication Connection

note

There is no Authentication Connection needed to set up Bulk File System Integration Connections.


Discovery Connector

note

There is no Discovery Connector for the Bulk File System integration.


Integration Connection

Most Integration Connections can act in both repository (read) and output (write) modes; a connection that supports only one mode will not appear as an option for the other when creating or editing a job. This connection can only be used as a repository connection. See the integration connection documentation for more information on setting up an integration connection.

Integration Connection Fields

  • Connection Name: This is a unique name given to the connector instance upon creation.
  • Description: A description of the connection.

Job Configuration

Job Configuration Fields

  • Source Directory: The directory to begin crawling for BFS files. See the example layout after this list.
  • Do not convert metadata keys to lowercase: Simflofy converts all type and field values to lowercase by default. If this is checked, all fields will keep their original case.
  • Process Folders: Tells the job to process folders. If this is checked and the job is rerun for errors, folders will be processed again.
  • Process Files: Tells the job to process files. Checked by default.
  • Check for multi-valued fields: Checks field values for commas. If commas are present, the values will be added to the metadata as multi-valued fields.
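
As a rough illustration, a source directory laid out for a BFS job might look like the listing below. The folder and file names are hypothetical; what matters is that each content file is paired with a companion .metadata.properties.xml file.

contracts/                                        <- folder (processed when Process Folders is checked)
contracts/agreement.pdf                           <- file binary
contracts/agreement.pdf.metadata.properties.xml   <- metadata for agreement.pdf
images/IMG_1967.jpg                               <- file binary
images/IMG_1967.jpg.metadata.properties.xml       <- metadata for IMG_1967.jpg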

Version History Files

The import tool also supports loading a version history for each file. To do this, create a file with the same name as the main file, but append a .v# extension. For example:

IMG_1967.jpg.v1                           <- version 1 content
IMG_1967.jpg.v2                           <- version 2 content
IMG_1967.jpg                              <- "head" (latest) revision of the content

This also applies to metadata files if you want to capture metadata history as well. For example:

IMG_1967.jpg.metadata.properties.xml.v1   <- version 1 metadata
IMG_1967.jpg.metadata.properties.xml.v2   <- version 2 metadata
IMG_1967.jpg.metadata.properties.xml      <- "head" (latest) revision of the metadata

Additional notes on version history loading:

  • You can't create a new node based on a version history only. You must have a head revision of the file.
  • Version numbers do not have to be contiguous. You can number your version files however you want, provided you use whole numbers (integers).
  • The version numbers in your version files won't be used in Content Services. The version numbers in Content Services will be contiguous, starting at 1.0 and increasing by 1.0 for every version (so 1.0, 2.0, 3.0, and so on). Content Services doesn't allow version labels to be set to arbitrary values, and the bulk import doesn't provide any way to specify whether a given version should have a major or minor increment.
  • Each version can contain a content update, a metadata update, or both. You are not limited to updating everything for every version. If content or metadata is not included in a version, the prior version's content or metadata will remain in place for the next version.

The following example shows all possible combinations of content, metadata, and version files:

IMG_1967.jpg.v1                           <- version 1 content
IMG_1967.jpg.metadata.properties.xml.v1   <- version 1 metadata
IMG_1967.jpg.v2                           <- version 2 content
IMG_1967.jpg.metadata.properties.xml.v2   <- version 2 metadata
IMG_1967.jpg.v3                           <- version 3 content (content-only version)
IMG_1967.jpg.metadata.properties.xml.v4   <- version 4 metadata (metadata-only version)
IMG_1967.jpg.metadata.properties.xml      <- "head" (latest) revision of the metadata
IMG_1967.jpg                              <- "head" (latest) revision of the content
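
To illustrate the contiguous renumbering described above, the version files in this example would be imported roughly as follows (this mapping assumes the head revision is imported as the final version):

version 1 content + version 1 metadata    -> imported as 1.0
version 2 content + version 2 metadata    -> imported as 2.0
version 3 content (content only)          -> imported as 3.0
version 4 metadata (metadata only)        -> imported as 4.0
head content + head metadata              -> imported as 5.0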