crm-notes


Notes On CRM - SugarCRM, SuiteCRM, etc


SuiteCRM Architecture

Front-End Framework

Sugar clients are based on a front-end framework called Sidecar. Sidecar is built on open-source technology: Backbone.js, jQuery, Handlebars.js, and Bootstrap. It is a modern, single-page client architecture. Sugar clients connect to the Sugar server application via the client REST API. The REST API is implemented in PHP; it drives server-side business logic and interacts with the database. Anything that can be accomplished via one of the clients can also be accomplished via the REST API.

Modules

The Sugar platform is organized around modules. A module is a vertically integrated application component, traditionally built around a single feature or record type (and its underlying database table). For example, contact records are managed via a Contacts module that contains all the business logic, front-end interface definitions, REST APIs, data schema, and relationships with other modules.

Custom modules can be created and deployed as needed in order to add new features to a Sugar application instance.

Metadata

Sugar's modules are defined primarily using metadata. There are two types of metadata definitions within Sugar: Vardefs, which define the data model for Sugar modules, and Viewdefs, which define the user interface components used with a module.
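As a sketch, a vardef entry that adds a custom field to Accounts through the Extension framework might look like the following. The file path follows the standard Extension layout, but the field name `priority_c` and its settings are illustrative assumptions, not a real field:

```php
<?php
// custom/Extension/modules/Accounts/Ext/Vardefs/sugarfield_priority_c.php
// Illustrative only: 'priority_c' and its options are assumed for the example.
$dictionary['Account']['fields']['priority_c'] = array(
    'name'    => 'priority_c',       // custom fields end with the _c suffix
    'vname'   => 'LBL_PRIORITY',     // label key resolved via language files
    'type'    => 'enum',
    'options' => 'priority_list',    // dropdown list name
    'len'     => 100,
    'default' => 'normal',
);
```

After a Quick Repair and Rebuild, definitions like this are merged into the module's effective vardefs.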

Beyond metadata, Sugar is highly customizable and includes an extensive Extensions Framework.

Soft Deletes

When a user deletes a record, Sugar only soft deletes it (the row is flagged rather than removed). Over time this causes performance degradation if the soft-deleted rows are never purged from the database.
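Concretely, a delete from the UI just flips the deleted flag, and every application query filters on it, so stale rows inflate each scan. A periodic hard purge can reclaim the space. This is a sketch only; the 90-day cutoff is an assumption, and any purge should be tested against a copy first:

```sql
-- A UI delete is effectively:
UPDATE accounts SET deleted = 1, date_modified = NOW() WHERE id = '<guid>';

-- Application queries always filter on the flag:
SELECT id, name FROM accounts WHERE deleted = 0;

-- Periodic hard purge of old soft-deleted rows (sketch; verify on a copy first):
DELETE FROM accounts
WHERE deleted = 1 AND date_modified < NOW() - INTERVAL 90 DAY;
```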

Disabling Logic Hooks and Workflows During Initial Data Import

  • When data is being migrated into Sugar, logic hooks may add unnecessary time to your API requests; consider disabling them for the initial load.
  • Similarly, it is better to disable workflows during the initial import (via the Admin > Workflows interface).
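Logic hooks are registered in plain PHP arrays, so one low-tech way to disable one for the import window is to comment out its registration. The module, file, and class names below are illustrative assumptions; only the `$hook_array` structure itself is the standard mechanism:

```php
<?php
// custom/modules/Contacts/logic_hooks.php (illustrative entry)
$hook_array = array();
$hook_array['before_save'] = array();

// Commented out for the duration of the bulk import, then restored:
// $hook_array['before_save'][] = array(
//     1,                                        // processing order
//     'Enrich contact',                         // label
//     'custom/modules/Contacts/EnrichHook.php', // file (hypothetical)
//     'EnrichHook',                             // class (hypothetical)
//     'beforeSave',                             // method (hypothetical)
// );
```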

Queuing Data in the Job Queue

  • There is a REST API call to queue data into the job queue, which is later processed by the cron job.

Role Management

  • Roles in Sugar determine which users can access modules as well as which operations the users are allowed to perform within those modules.
  • When configuring role settings in Sugar, an administrator may choose to enforce restrictions for entire modules or for selected fields on the records in a module. To fully leverage Sugar's security model, both permission types can be used together for a given role and should be considered in conjunction with the Team Management settings used by your organization.
  • Administrators have the ability to create different roles and assign regular users to the roles. A user can be related to any number of roles, and when multiple roles or role settings are applied, Sugar adheres to a most-restrictive policy to determine the user's appropriate access levels.

Main Tables

Table Name       Description
----------------------------------------------------------------------
Accounts         Customer company info. Related: Cases, Contacts,
                 Contracts.
Contacts         Customer contact person (name, etc.). Related: Accounts,
                 Projects, Contracts, Quotes. Optional associations:
                 assigned_user_id, team_id.
Products         Unclear: my company's products that we sell or consume?
                 Catalog meta-info, or a product sold to one customer
                 (account)?
Campaigns        Promotion activity. Related to period, cost, etc.
Workflow         Has base_module. Related tables: workflow
                 alert/action/trigger shells.
fields_meta_data Contains definitions of custom module field names and
                 types.

The id column is a char(36) GUID, presumably for maximum compatibility across databases (debatable). Auto-increment fields are rarely used; case_number is one example.

Custom tables are named with a <modulename>_cstm suffix, and custom field names end with a _c suffix.
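Queries that touch custom fields therefore join the base table to its _cstm table, whose key column is id_c. The field name `mycustomfield_c` below is a hypothetical example:

```sql
-- Custom fields for contacts live in contacts_cstm, keyed by id_c:
SELECT c.id, c.last_name, cc.mycustomfield_c
FROM contacts c
LEFT JOIN contacts_cstm cc ON cc.id_c = c.id
WHERE c.deleted = 0;
```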

Large Inserts

Use the Faker library to generate data:

$faker = Faker\Factory::create();
// $faker->company, $faker->firstName, $faker->lastName, etc. return random values.
// See https://github.com/fzaninotto/Faker

// Assumes $accountsCount and $contactsCount are set, and that getGUID()
// returns a 36-character GUID string (e.g. SuiteCRM's create_guid()).
$outAccounts = 'accounts.csv';
$outContacts = 'contacts.csv';
$outAccountsContacts = 'accountsContacts.csv';
$outAccountsHandle = fopen($outAccounts, 'a');
$outContactsHandle = fopen($outContacts, 'a');
$outAccountsContactsHandle = fopen($outAccountsContacts, 'a');
$now = (new DateTime())->format("Y-m-d H:i:s");
for ($x = 0; $x < $accountsCount; $x++) {
    if ($x % 100 === 0) {
        echo ".";                    // progress indicator
    }
    if ($x > 0 && $x % 1000 === 0) {
        echo "\n";
    }
    $accountId = getGUID();
    fputcsv($outAccountsHandle, array($accountId, $faker->company, $now, $now, 1, 1));
    for ($y = 0; $y < $contactsCount; $y++) {
        $contactId = getGUID();
        fputcsv($outContactsHandle, array($contactId, $faker->firstName, $faker->lastName, $now, $now, 1, 1));
        fputcsv($outAccountsContactsHandle, array(getGUID(), $contactId, $accountId, 0, $now));
    }
}
fclose($outAccountsHandle);
fclose($outContactsHandle);
fclose($outAccountsContactsHandle);

Note on fputcsv:

fputcsv($fp, $foo, "\t");   // Tab delimited
fputcsv($fp, $foo, '\t');   // Does not work: single quotes do not expand
                            // escapes, so this passes the two characters \ and t.

int fputcsv ( resource $handle , array $fields [, string $delimiter = "," [,
              string $enclosure = '"' [, string $escape_char = "\\" ]]] )
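The quoting behaviour can be checked quickly with plain PHP, writing to an in-memory stream:

```php
<?php
// Demonstrate that a double-quoted "\t" yields a real tab delimiter:
// fields with no special characters are written unquoted.
$fp = fopen('php://memory', 'w+');
fputcsv($fp, array('a', 'b'), "\t");
rewind($fp);
echo fgets($fp) === "a\tb\n" ? "tab delimiter OK\n" : "unexpected output\n";
fclose($fp);
```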

Drop the indexes before the bulk insert to speed it up, and recreate them afterwards. Use SHOW CREATE TABLE to capture the index definitions before dropping:

SHOW CREATE TABLE accounts;
ALTER TABLE accounts DROP KEY idx_accnt_id_del;
ALTER TABLE accounts DROP KEY idx_accnt_name_del;
ALTER TABLE accounts DROP KEY idx_accnt_assigned_del;
ALTER TABLE accounts DROP KEY idx_accnt_parent_id;
SHOW CREATE TABLE contacts;
ALTER TABLE contacts DROP KEY idx_cont_last_first;
ALTER TABLE contacts DROP KEY idx_contacts_del_last;
ALTER TABLE contacts DROP KEY idx_cont_del_reports;
ALTER TABLE contacts DROP KEY idx_reports_to_id;
ALTER TABLE contacts DROP KEY idx_del_id_user;
ALTER TABLE contacts DROP KEY idx_cont_assigned;
SHOW CREATE TABLE accounts_contacts;
ALTER TABLE accounts_contacts DROP KEY idx_account_contact;
ALTER TABLE accounts_contacts DROP KEY idx_contid_del_accid;

Load Data:

LOAD DATA INFILE 'accounts.csv' INTO TABLE accounts FIELDS TERMINATED BY ',' 
     ENCLOSED BY '"' LINES TERMINATED BY '\n' (id,name,date_entered, date_modified,modified_user_id,created_by);  

-- Query OK, 40100 rows affected (4.72 sec)  
-- Records: 40100  Deleted: 0  Skipped: 0  Warnings: 0

LOAD DATA INFILE 'contacts.csv' INTO TABLE contacts FIELDS TERMINATED BY ',' 
     ENCLOSED BY '"' LINES TERMINATED BY '\n' (id,first_name,last_name,date_entered, 
                                               date_modified,modified_user_id,created_by);  
-- Query OK, 5000000 rows affected (58 min 8.44 sec)  
-- Records: 5000000  Deleted: 0  Skipped: 0  Warnings: 0

LOAD DATA INFILE 'accountsContacts.csv' INTO TABLE accounts_contacts FIELDS TERMINATED BY ',' 
        ENCLOSED BY '"' LINES TERMINATED BY '\n' (id,contact_id, account_id,deleted,date_modified);  

-- Query OK, 5000000 rows affected (32 min 55.77 sec)  
-- Records: 5000000  Deleted: 0  Skipped: 0  Warnings: 0

Recreate Index:

ALTER TABLE accounts ADD INDEX idx_accnt_id_del (id,deleted), 
    ADD INDEX idx_accnt_name_del (name,deleted), 
    ADD INDEX idx_accnt_assigned_del (deleted,assigned_user_id), 
    ADD INDEX idx_accnt_parent_id (parent_id);
-- Query OK, 0 rows affected (5.37 sec)

ALTER TABLE contacts ADD INDEX idx_cont_last_first (last_name,first_name,deleted), 
    ADD INDEX idx_contacts_del_last (deleted,last_name), 
    ADD INDEX idx_cont_del_reports (deleted,reports_to_id,last_name), 
    ADD INDEX idx_reports_to_id (reports_to_id), 
    ADD INDEX idx_del_id_user (deleted,id,assigned_user_id), 
    ADD INDEX idx_cont_assigned (assigned_user_id);
-- Query OK, 0 rows affected (15 min 41.37 sec)


ALTER TABLE accounts_contacts ADD INDEX idx_account_contact (account_id,contact_id), 
    ADD INDEX idx_contid_del_accid (contact_id,deleted,account_id);
-- Query OK, 0 rows affected (24 min 55.06 sec)

Another example of load data:

$ mysql --database=sakila \
    >   --show-warnings \
    >   --init-command="set sql_mode = 'STRICT_ALL_TABLES';" \
    >   -vve "load data infile '/tmp/rental.txt' into table sakila.rental \
    >   (rental_id,rental_date,inventory_id,customer_id,@return_date,staff_id,last_update) \
    >   set return_date = coalesce(@return_date,now());"

You can use the IGNORE or REPLACE keyword to ignore or replace duplicate rows from the file:

LOAD DATA INFILE 'member.csv' 
IGNORE   -- or REPLACE
INTO TABLE tbl_member  ....

CRM Dictionary

There is no single dictionary table that contains all module info; the source code structure itself is modified when new modules are added. To remove a custom module:

Remove the modules/<mymodule>/ directory
Remove the custom/modules/<mymodule>/ directory

Beans Usage

Fetch Parent Record

The independent table is called the parent; the dependent one is the child. That is, small primitive entities such as authors and users are typically parent tables, and the tables that refer to them via foreign keys are child tables.

The examples typically shown navigate to child records from the context of a parent record, but you can also go the opposite way, from a child record up to its parent:

//Load Contact
$bean = BeanFactory::getBean('Contacts', $id);

//If relationship is loaded
if ($bean->load_relationship('accounts'))
{
    //Fetch related beans
    $relatedBeans = $bean->accounts->getBeans();

    $parentBean = false;
    if (!empty($relatedBeans))
    {
        //order the results
        reset($relatedBeans);

        //first record in the list is the parent
        $parentBean = current($relatedBeans);
    }
}
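The parent-to-children direction looks much the same, starting from the Accounts bean instead. This is a sketch using the same load_relationship/getBeans API as above; 'contacts' is assumed to be the link field name on Accounts:

```php
// Load the parent account, then walk down to its related contacts.
$account = BeanFactory::getBean('Accounts', $accountId);

if ($account->load_relationship('contacts')) {
    foreach ($account->contacts->getBeans() as $contact) {
        echo $contact->first_name . ' ' . $contact->last_name . "\n";
    }
}
```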

Use of special 'email1' field of SugarBean

HOWTO: Add and retrieve email addresses in a module through code (bsoremsugar)

$sea = new SugarEmailAddress;   
// Grab the primary address for the given record represented by the $bean object  
$primary = $sea->getPrimaryAddress($bean);  
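Adding an address goes through the bean's emailAddress object before save. A sketch, assuming the common addAddress() usage; the address itself is a made-up example:

```php
// Sketch: attach a new primary address to a bean, then save the bean so the
// underlying email_addresses / email_addr_bean_rel rows are written.
$bean = BeanFactory::getBean('Contacts', $id);
$bean->emailAddress->addAddress('jane@example.com', true /* primary */);
$bean->save();
```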

Performance Strategy

  • Use lazy loading.
  • Cache frequently used lookups: users, email addresses, company names, etc.
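The caching bullet can be sketched as plain in-process memoisation. fetchUserName() below is a hypothetical stand-in for the real bean or database lookup:

```php
<?php
// Hypothetical loader standing in for a real DB/bean lookup.
function fetchUserName($id)
{
    return 'user-' . $id;
}

// Memoise lookups so repeated requests for the same id hit the
// static array instead of the database.
function cachedUserName($id)
{
    static $cache = array();
    if (!isset($cache[$id])) {
        $cache[$id] = fetchUserName($id);
    }
    return $cache[$id];
}

echo cachedUserName('42') . "\n";   // user-42 (loaded)
echo cachedUserName('42') . "\n";   // user-42 (served from cache)
```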