Sunday 13 November 2016

10 Things I Love About Wave Analytics

If your company has a huge amount of data and your top management needs actionable reports on turnover, growth, revenue and so on, you need a robust, scalable BI tool to provide these answers quickly. This is where Salesforce Wave Analytics has emerged as a crucial player.

Sample Wave Dashboard (Image source: elibrumbaugh.com)

Built on the Wave platform, Salesforce Analytics Cloud is much more than simple Business Intelligence. Organizations can build their own BI applications and make important data-driven decisions, enabling quick actions and smarter connectivity.

How is Wave different from Traditional BI?

Every BI application has a different architecture. The speed with which your BI application gives you the required information is a key factor in its usability, which in turn depends on how you get the data from the source system, how you store the data, how you query the data and how you present the data on the UI.

Image source: Salesforce Wave Training

Most traditional BI tools need to be installed on the client’s machine and store data the same way a relational database would. In almost all of them, queries are complex and time consuming, and the tools often fall short of providing the complex insights the business requires.

Compared to traditional BI tools, Wave scores on a lot of fronts. Not only does it deliver the required information faster, it can get data from Salesforce, CSV files and partners like Jitterbit, Informatica, Talend, Mulesoft, Boomi, Snaplogic etc. Additionally, Wave is schema-free and stores the data as compressed JSON. To top it all, Wave is 100% mobile.

Why Wave Analytics?

A pertinent question is why you need Wave Analytics when Salesforce already has built-in reports and dashboards. The answer lies in Wave’s capabilities, which extend beyond them: standard Salesforce dashboards cannot be built on data from external systems, often cannot surface the insights you need from the data, and are read-only on Salesforce1, with no option to edit.


Which brings me to the 10 most significant reasons why I absolutely love Wave!
  1. Search-based technology: Wave can get data from Salesforce, CSV files, and external tools like Informatica, Talend, Jitterbit etc. Wave runs the dataflow, and the Wave parser converts all incoming data into a compressed structure stored as JSON, which is plain text in key-value pairs and a very lightweight form of data. With minimal effort, Wave can search on these keys and extract insights from the data.
  2. Schema-free, non-relational database: Unlike traditional BI tools, the Wave engine does not store data in a relational database, because a relational database brings quite a few limitations with it: repeated values, join operations that are always expensive, and linear size growth that makes it hard to optimize for read and write operations. Schema-free Wave instead stores the data in a compressed format as key-value pairs.

    Image source: Salesforce Wave Training

  3. Inverted Index: Speed is a key factor for any successful BI application and it depends on multiple factors like hardware, chosen architecture, query, search operations, etc.



    Traditional BI tools use conventional indexing. However, if you have a large amount of data that grows rapidly over time, your index grows with it; you end up needing indexes for your indexes, so indexing large datasets can actually hurt speed. Wave instead implements an inverted index: just as the last pages of a book list keywords along with the page numbers on which they appear, Wave stores its data as key-value pairs that map values back to records (see the sketch after this list).
  4. Loading external data into Analytics Cloud: When stakeholders decide on a BI tool, they consider aspects like the type of data within their systems and the compatibility between those systems and the BI tool. Wave overcomes these concerns: the Wave engine can ingest data from Salesforce as well as CSV files and partners like Jitterbit, Informatica, Talend, Mulesoft, Boomi, Snaplogic etc.

    Image source: Salesforce


  5. Trusted and secure architecture: Wave being a Salesforce product essentially means that all of Salesforce’s security measures hold true for Wave as well. Additionally, Wave has app-level, field-level and row-level security to ensure authorized access.

    Image source: Salesforce


  6. Mobile-first design: The top-level management of every company needs relevant dashboards, and it is all the more useful if they can access them on the go. Wave enables users to create, update and edit dashboards on the fly, with a beautiful, responsive UI for some added zing. It is even available on the Apple Watch.

    Image source: techcrunch.com


  7. Customized business apps: Predesigned Wave apps like Sales Wave, eCommAnalytics, FinancialForce etc. are readily available on the AppExchange. Users can install these Wave apps and get instant answers to their questions.
  8. Visualforce integration: Yes, you read that right! A Wave dashboard can be part of your Visualforce page using the <wave:dashboard> component, bringing the power of Wave to your Visualforce page.
  9. Display dashboards in a Salesforce sObject page layout: You can display Wave dashboards within your Salesforce page layout and show user-specific records on that dashboard.
  10. Can perform actions in Salesforce: Even though Wave is altogether a different platform, it has the power to perform certain actions directly on Salesforce objects, like creating a task or event, logging a call, creating a case or updating records, etc.
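To make the inverted-index idea from point 3 concrete, here is a toy sketch in TypeScript. It is illustrative only and has nothing to do with Wave’s actual engine: each field value maps back to the ids of the rows that contain it, so a search becomes a dictionary lookup instead of a full scan.

```typescript
// Toy inverted index: every "field=value" key maps to the row ids containing it.
const rows = [
  { id: 1, stage: "Closed Won", region: "EMEA" },
  { id: 2, stage: "Prospecting", region: "APAC" },
  { id: 3, stage: "Closed Won", region: "APAC" },
];

const index = new Map<string, number[]>();
for (const row of rows) {
  for (const [field, value] of Object.entries(row)) {
    if (field === "id") continue; // index only the data fields
    const key = `${field}=${value}`;
    index.set(key, [...(index.get(key) ?? []), row.id]);
  }
}

console.log(index.get("stage=Closed Won")); // [1, 3] -- a lookup, not a table scan
```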
Wave has a killer UI to top it all, ensuring that you can truly see and realize the power of analytics packed into it. Analytics has never been this powerful and beautiful!



Written by Anand Shinde, Salesforce Developer at Eternus Solutions

Monday 7 November 2016

Managing Different Perspectives of the Developer Console

Being a rookie Salesforce.com developer, I often wondered if System.debug statements were the only way to find out where my superbly written code was breaking. Much to my dismay, I was often all at sea, trying to find the actual cause of an issue using only the execution debug logs, without actually editing the code. Wasn’t there a better way, I thought? That’s when I stumbled upon the built-in features of the Developer Console.

The Developer Console provides a convenient set of tools for efficiently tracking down logical issues, letting developers see exactly what is happening under the hood. A perspective is a predefined arrangement of panels in the Developer Console’s Log Inspector; the tools are grouped together by default and presented to us in the form of a perspective.

To view all the default perspectives, simply click Debug → Perspective Manager in the Developer Console.

Out of these predefined perspectives, one of the most helpful panels is the Execution Overview panel, which is part of the Analysis perspective.

The Execution Overview panel consists of four tabs.

Save Order Tab

Let’s consider a case where you are new to a project and are unable to gauge the exact flow of execution. Alternatively, you might be familiar with the project but are observing some weird changes: some fields or actions are affected only when the code executes recursively, but for some reason you cannot determine where and what is making the code run into recursion. Another possible scenario is being unable to determine which validation rule or workflow rule fired, or whether a custom workflow or validation is causing the issue.

For all such scenarios, the immediate solution is to head to the Save Order tab, part of the Execution Overview panel.

The Save Order tab provides an overview of all the actions performed during a DML operation in a beautiful color-coded format. Not only is it color coded, it also displays all the actions in an intuitive sequence diagram. You no longer need to scratch your head determining whether the validation rule ran first or the trigger; the Save Order tab has it all covered.

This tab follows a color-coded format so the end user can understand the save order quickly. Each type of action has its own color:

  • Before trigger
  • After trigger
  • Validation rule
  • Assignment rule
  • Workflow rule

Once you click on a validation rule, you are taken to the actual validation rule in the execution log panel, where you can view its details.




Executed Units Tab

As part of Salesforce development, we have all faced a few exception scenarios we are usually scared of but which need to be tackled nonetheless, such as the ‘CPU time limit exceeded’ exception or the ‘Heap size limit exceeded’ exception. There may also have been cases where you could not figure out the total number of rows affected by a DML action.

The Executed Units tab, which is a part of the Execution Overview panel, helps you find the answers to all these queries very easily.

It displays the system resources used by each item in the process. Additionally, there are various buttons at the bottom of the tab: Methods, Queries, Workflows, Callouts, DML, Validations, Triggers, and Pages, which can be used to filter the information by item type.

In the Executed Units tab, we have the following columns:

  1. What: The operations, known as process items, that were executed in a particular execution. The process items include:
    • Method
    • Queries
    • Workflow
    • Callouts
    • DML
    • Validations
    • Triggers
    • Pages
  2. Name: The name of each process item. For example, if “Method” appears in the What column, the Name column shows the name of the method that was executed.
  3. Sum: If the process executed more than once, the total duration of those executions, in milliseconds.
  4. Avg: The average duration (in milliseconds) taken for the process to execute, calculated as Sum/Count.
  5. Max: If the process was called more than once, the maximum duration (in milliseconds) among those executions. One use case where this column helps is in solving the ‘CPU time limit exceeded’ exception.
  6. Min: If the process was called more than once, the minimum duration (in milliseconds) among those executions.
  7. Count: The number of times a particular process was called during the execution.
  8. Heap: The amount of space the process took on the heap, in bytes. A ‘Heap size limit exceeded’ exception can be traced much faster by observing this column.
  9. Query Type: The type of query. Possible values are SOQL and SOSL.
  10. Sum rows: The total number of records changed during the execution of the process.
  11. Avg rows: The average number of records changed during the execution of the process.
  12. Max rows: The maximum number of records changed during the execution of the process. For example, the Max rows count can help you solve a ‘SOQL limit exceeded’ exception by pinpointing the exact process that caused it.
  13. Min rows: The minimum number of records changed during the execution of the process.



So many tools in such a small window! Magic, thy name is Developer Console! Guess what, the fairy tale doesn’t end here; there are some more unsung features that prove to be a great boon to the developers.  But that is for another day, as I shall be covering them in my next blogs. Till then, let the magic of Developer Console make life a smooth ride for you!






Written by Kaajal Bhawale, Salesforce Developer at Eternus Solutions

Tuesday 9 August 2016

How to Generate a PDF from your Visualforce Pages: Going Beyond ‘renderAs’

As a Salesforce developer, your customer might often need you to convert his Visualforce page into a PDF format. That is simple enough, right? All you need to do is to use the “renderAs” attribute on the Visualforce page and you are done! Hold on! There’s more to this than what meets the eye.

Imagine a similar requirement, but this time with a page built using Lightning tools. On one hand, that saves you from an archaic UI and gives you lots of great charting options for the perfect GUI your client craves; on the other, it takes away the ease of generating a PDF from the same page with a simple use of “renderAs”. What will you do now?

In this blog post, I will take you through a simple workaround using the jsPDF JavaScript library. I kept all the graphs and tables inside a <div></div> tag, converted that div into a base64 image using a canvas library, and then wrote the same content into the PDF output.

Given below are the steps to follow to enable PDF creation even for complex pages.
  1. Download the following jQuery files and upload them as static resources within Salesforce. Add them to your Visualforce page as well.
  2. Wrap your design portion (the part that is complex, with graphical charts and long tables) within a div tag.
  3. Add the code given below in your Apex class. The function getImgUrl enables you to generate a base64 image URL of the div portion (from step 2).
  4. The code snippet given below will add your base64-encoded image URL into your PDF file (see the sketch after this list).
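Since the original snippets were shared as screenshots, here is a hedged client-side sketch of steps 3 and 4, assuming html2canvas and jsPDF are the libraries loaded as static resources (the div id "pdfContent" and the function bodies are illustrative, not the original code):

```typescript
import { jsPDF } from "jspdf";
import html2canvas from "html2canvas";

// Step 3: a getImgUrl-style helper that renders the chart/table div onto a
// canvas and returns its base64 image URL along with the image dimensions.
async function getImgUrl(divId: string) {
  const element = document.getElementById(divId) as HTMLElement;
  const canvas = await html2canvas(element);
  return { url: canvas.toDataURL("image/png"), width: canvas.width, height: canvas.height };
}

// Step 4: write the base64-encoded image URL into the PDF output.
async function exportPdf(): Promise<void> {
  const img = await getImgUrl("pdfContent");
  const doc = new jsPDF("p", "pt", [img.width, img.height]); // page sized to the image
  doc.addImage(img.url, "PNG", 0, 0, img.width, img.height);
  doc.save("page.pdf");
}
```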

Once you execute this code, your PDF will contain the graph and table that you wanted to include within it. Your problem ends here.

Does it?

There is still a critical scenario that can ruin all the hard work that you did above. What if the size of the image is more than the size of the PDF? Wouldn’t this approach cause the image to break and not be displayed properly in the PDF format?

Don’t worry! All is not lost.

In order to work around the above scenario, you need to know the height of your generated image; you get it when you convert the image into a base64 string. Now set the minimum height of your PDF page (in my example, I am setting it to 1760, approximating the standard page size).

Now you have the height of your PDF as well as of the image. Divide the image height by the PDF page height; the ratio gives you the number of pages your content (in the form of an image) will span across the PDF. Now you need to iterate over the number of pages and keep adding the image into your PDF. The sketch below illustrates the idea.
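Under the same assumptions as the earlier sketch (html2canvas plus jsPDF, with 1760 as the page height mentioned above), the loop could look like this:

```typescript
import { jsPDF } from "jspdf";
import html2canvas from "html2canvas";

async function exportLongPdf(divId: string): Promise<void> {
  const canvas = await html2canvas(document.getElementById(divId) as HTMLElement);
  const imgData = canvas.toDataURL("image/png");

  const pageHeight = 1760;                                 // minimum PDF page height
  const pageCount = Math.ceil(canvas.height / pageHeight); // image height / page height

  const doc = new jsPDF("p", "pt", [canvas.width, pageHeight]);
  for (let page = 0; page < pageCount; page++) {
    if (page > 0) doc.addPage();
    // Draw the full image shifted up by one page height per page, so each page
    // exposes the next slice instead of a clipped, broken image.
    doc.addImage(imgData, "PNG", 0, -page * pageHeight, canvas.width, canvas.height);
  }
  doc.save("report.pdf");
}
```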


For more details on how this works, check out this link https://pdf-generate-developer-edition.ap2.force.com/

Happy Coding..!!!

Reference Link: https://parall.ax/products/jspdf 





Written by Siddhraj Atodaria, Salesforce Developer at Eternus Solutions

Monday 11 July 2016

Generate OAuth Authorization Token using OWIN with SharePoint

As a SharePoint developer, you are aware that SharePoint provides OAuth authorization services to generate an access token, subject to its own prerequisites. However, this approach has its limitations, and we can generate an equivalent token without using the OAuth authorization provided by SharePoint. In this blog, I will take you through generating a Bearer token that a custom third-party client application can use to authenticate and authorize with SharePoint and perform operations on SharePoint data.

Let's first understand how a SharePoint token works

When a user signs in to SharePoint, the user's security token is validated. The token is issued by an identity provider. SharePoint supports several kinds of user authentication.  For more information on this, see Authentication, authorization, and security in SharePoint 2013.

In SharePoint 2013, we can create apps using SharePoint Add-ins. These Add-ins also need to be authenticated and authorized with SharePoint, which can be done in several different ways. For more information on this, see Three authorization systems for SharePoint Add-ins.

As mentioned, for all of these authorization systems to get access tokens for the logged-in user, we either need to create a high-trust configuration using certificates or register with Microsoft Azure Access Control Service (ACS). In both scenarios, our custom site must be configured for secure access, i.e. the HTTPS protocol with high-trust certificates.

So, the workaround for creating access tokens for a SharePoint site outside the options provided by Microsoft is to build a custom ASP.NET Web API using OWIN.

What is OWIN

OWIN (Open Web Interface for .NET) defines a standard interface between .NET web servers and web applications. Here it acts as a middleware OAuth 2.0 authorization server between the SharePoint site and a third-party client application.

Using ASP.NET Web API and OWIN, we can authenticate and authorize a user with the SharePoint site, generate an access token for that user, and then use this token for CRUD operations on the SharePoint site via the SharePoint REST APIs by passing the "Bearer" access token in the request headers.

Authorization Methods in SharePoint

To perform CRUD operations on SharePoint content using SharePoint REST APIs, there are different ways to pass authorization:
  1. System.Net.CredentialCache.DefaultCredentials:
    The DefaultCredentials property applies only to NTLM, negotiate, and Kerberos-based authentication.

    DefaultCredentials represents the system credentials for the current security context in which the application is running. For a client-side application, these are usually the Windows credentials (username, password, and domain) of the user running the application. For ASP.NET applications, the default credentials are the user credentials of the logged-in user, or the user being impersonated.
  2. System.Net.NetworkCredential(username, password, domain):
    The NetworkCredential class is a base class that supplies credentials in password-based authentication schemes such as basic, digest, NTLM, and Kerberos.

    This class does not support public key-based authentication methods such as Secure Sockets Layer (SSL) client authentication.
  3. Bearer Token:
    Tokens are issued to clients by an authorization server with the approval of the resource owner. The client uses the access token to access the protected resources hosted by the resource server. This specification describes how to make protected resource requests when the OAuth access token is a bearer token.
The first option cannot be used in a custom third-party client application, as such an application does not understand the default credentials. The second option passes the username, password and the domain in which the user needs to be authorized, which poses a security threat: the client application would need to store the user's password and send it whenever required.

SharePoint 2013 uses the OAuth 2.0 authorization framework for Bearer token usage in SharePoint Add-ins. Once the access token is generated, the custom application can use it to perform CRUD operations on SharePoint 2013 content through the SharePoint REST APIs. The token is sent through headers from code running in a browser client. You will not need an access token if you are making the call from a SharePoint-hosted add-in.

In a similar way, we can generate an access token from an ASP.NET Web API with OWIN by passing in the username and password the first time. Once the access token is generated, we can use it for CRUD operations through the SharePoint REST APIs.

How to Generate Access Token using OWIN

Below are the steps to generate access token using OWIN
  1. Create a new empty ASP.NET Web Application project. Select the "Web API" check box under "Add folders and core references for", and under Authentication select "No Authentication".

  2. Create a class "Startup.cs" at the root level of the project; OWIN requires it.
  3. Install the required OWIN components in the solution using the NuGet Package Manager console:
    • Install-Package Microsoft.Owin.Host.SystemWeb
    • Install-Package Microsoft.Owin.Security
    • Install-Package Microsoft.AspNet.Identity.Owin
    The above commands will install the OWIN Hosting infrastructure as shown below

  4. Every OWIN Application has a startup class where you specify components for the application pipeline. There are different ways you can connect your startup class with the runtime, depending on the hosting model you choose (OwinHost, IIS, and IIS-Express).

    OwinStartup Attribute: This is the approach most developers will take to specify the startup class. The following attribute will set the startup class to the TestStartup class in the StartupDemo namespace.
  5. Add Configuration method with IAppBuilder parameter
  6. Configure OAuth Authorization for application which will be authenticated and authorized with a SharePoint site and Domain.
  7. Now override the ValidateClientAuthentication and GrantResourceOwnerCredentials methods as per your requirements, as shown below, to authenticate and authorize the user against the User Information list in the SharePoint site.
  8. Now build and test the application by calling the GeToken method and passing the "UserName" and "Password" parameters. The method returns the bearer token, including the token value and its expiry date-time. This token can now be used to perform CRUD operations through the SharePoint REST APIs (a hedged client-side sketch follows this list).
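As a rough illustration of step 8 from the client's side, the sketch below assumes the OWIN server exposes the conventional /token endpoint with the resource-owner password grant and returns an access_token field; the URLs and list name are placeholders, not the original code.

```typescript
// Fetch a bearer token from the custom OWIN Web API (placeholder URL).
async function getToken(userName: string, password: string): Promise<string> {
  const response = await fetch("https://your-owin-api.example.com/token", {
    method: "POST",
    headers: { "Content-Type": "application/x-www-form-urlencoded" },
    body: new URLSearchParams({ grant_type: "password", username: userName, password }),
  });
  const json = await response.json();
  return json.access_token; // expiry information is returned alongside the token
}

// Pass the bearer token in the headers of a SharePoint REST API query.
async function getListItems(token: string): Promise<unknown> {
  const url = "https://yoursite.example.com/_api/web/lists/getbytitle('Documents')/items";
  const response = await fetch(url, {
    headers: {
      Authorization: `Bearer ${token}`,
      Accept: "application/json;odata=verbose",
    },
  });
  return response.json();
}
```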
Wasn’t that simple? Do try this approach and let me know how it goes for you.






Written by Mahesh Nagawade, Sharepoint Expert at Eternus Solutions

Friday 24 June 2016

Fast Track SharePoint Add-ins development using Angular2


Integrating technologies is a nightmare for software engineers! Building a SharePoint add-in using only JavaScript/jQuery involves a lot of complexity, especially the effort required to keep the code clean and maintainable. However, the premise of integrating AngularJS and SharePoint is quite promising, and that's our Everest to scale today!

AngularJS has helped us overcome the issues pertaining to clean code and its maintainability for quite some time now. We also used it to integrate with our SharePoint Add-ins and it worked like a charm, helping us reduce a lot of development efforts. One of the best things that it provided was the ability to write modular code.

I heard about the new version, Angular2, which was due to be released, and was quite inquisitive about what was new in it and how I could use it to build better SharePoint applications. I started off by creating a sample application, and while developing it, I thought of integrating it with a SharePoint-hosted add-in. This is where all my problems began! An app which runs with the "npm start" command won't run with SharePoint. While struggling to make this work, I discovered the following solution.

Step 1:

The first step is to create a new SharePoint hosted add-in in Visual Studio and then remove jQuery using the NuGet package manager.

Remove jQuery using NuGet package manager


Step 2:

Create a new NPM configuration file “package.json” with dependencies as

npm dependencies


And devDependencies as
npm devDependencies


package.json will be used to download and install the packages required to run our Angular2 app. Packages under dependencies are crucial for the application to run, while packages under devDependencies are needed only for development and can be excluded when installing on a production environment.
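As JSON cannot carry comments, a caveat up front: the sketch below is only indicative of an RC-era Angular2 package.json; the exact packages and versions in the original screenshots may differ.

```json
{
  "name": "sharepoint-angular2-addin",
  "version": "1.0.0",
  "dependencies": {
    "@angular/common": "2.0.0-rc.4",
    "@angular/compiler": "2.0.0-rc.4",
    "@angular/core": "2.0.0-rc.4",
    "@angular/platform-browser": "2.0.0-rc.4",
    "@angular/platform-browser-dynamic": "2.0.0-rc.4",
    "core-js": "^2.4.0",
    "rxjs": "5.0.0-beta.6",
    "zone.js": "^0.6.12"
  },
  "devDependencies": {
    "ts-loader": "^0.8.2",
    "typescript": "^1.8.10",
    "typings": "^1.3.1",
    "webpack": "^1.13.1"
  }
}
```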

Now create a new TypeScript JSON Configuration file “tsconfig.json” and TypeScript Definition file “typings.json”.

tsconfig.json


typings.json
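As an indicative sketch (the original screenshots may differ), a typical Angular2-era tsconfig.json looked like the one below; typings.json simply lists the ambient type definitions installed through the typings tool.

```json
{
  "compilerOptions": {
    "target": "es5",
    "module": "commonjs",
    "moduleResolution": "node",
    "sourceMap": true,
    "emitDecoratorMetadata": true,
    "experimentalDecorators": true,
    "removeComments": false,
    "noImplicitAny": false
  },
  "exclude": ["node_modules", "dist"]
}
```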


Now you can delete the folders that we won't be using, like Scripts, Images, and Content. Move default.aspx to the root directory and delete the Pages folder as well. Create a new module and name it "app". Our folder structure will then look as shown below:

Folder Structure


Now install all node modules using the command "npm install".

Step 3:

We now need to create a component in the "app" module, named "app.component.ts".

app.component.ts
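A minimal sketch of what app.component.ts can contain (the selector and template text are illustrative):

```typescript
import { Component } from "@angular/core";

@Component({
  selector: "my-app",
  // Keep the template trivial for now; the real add-in markup goes here.
  template: "<h1>Angular2 inside a SharePoint hosted add-in</h1>",
})
export class AppComponent {}
```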

Create a new file “app/main.ts”.

main.ts
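And a matching sketch of app/main.ts, which bootstraps the root component using the RC-era API:

```typescript
import { bootstrap } from "@angular/platform-browser-dynamic";
import { AppComponent } from "./app.component";

// Start the Angular2 application with AppComponent as the root.
bootstrap(AppComponent);
```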


Step 4:

Now we will configure our app for Webpack. Webpack is a very powerful bundler: it bundles JavaScript files together and serves them to the client as a single response. Webpack searches for all the "import" statements in our application and then creates the bundles along with their dependencies.

To use Webpack, we will create a file "Webpack.config.js" and configure it to create the bundles app.bundle.js and vendor.bundle.js in the "dist/" folder.

webpack.config.js
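A hedged sketch of such a Webpack 1-era configuration; the loader choice assumes the ts-loader package, and the original screenshot may differ:

```javascript
var webpack = require("webpack");

module.exports = {
  entry: {
    app: "./app/main.ts",      // application code
    vendor: "./app/vendor.ts", // framework and library code
  },
  output: {
    path: __dirname + "/dist",
    filename: "[name].bundle.js", // app.bundle.js and vendor.bundle.js
  },
  resolve: {
    extensions: ["", ".ts", ".js"],
  },
  module: {
    loaders: [{ test: /\.ts$/, loader: "ts-loader" }],
  },
  plugins: [
    // Keep modules shared with vendor.ts out of app.bundle.js.
    new webpack.optimize.CommonsChunkPlugin({ name: "vendor" }),
  ],
};
```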


Add a typescript file in app module with name “vendor.ts”

vendor.ts
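An indicative vendor.ts: it only imports the framework packages so that Webpack gathers them into vendor.bundle.js (adjust the list to whatever your add-in actually uses):

```typescript
// Framework and polyfill imports collected into vendor.bundle.js.
import "core-js";
import "zone.js";
import "rxjs";
import "@angular/core";
import "@angular/common";
import "@angular/platform-browser";
import "@angular/platform-browser-dynamic";
```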


Step 5:

Now before running the application, run the following command to generate the required JavaScript files.

webpack --config webpack.config.js --progress --colors

Then include the "dist" folder in your project: click on Show All Files and include the folder.

Now your solution will look like this:
Final Folder Structure


Now add references in default.aspx.

Default.aspx


Step 6:

Now we can run the SharePoint Add-in


Wasn't that easy? We can use Angular2 to develop SharePoint-hosted add-ins with reusable and loosely coupled components.


Reference

You can read more about Webpack with angular2 in angular2's developer guide (https://angular.io/docs/ts/latest/guide/webpack.html) or Webpack's official site (https://webpack.github.io/).





Written by Manish Patil, Angular JS Expert at Eternus Solutions

Tuesday 21 June 2016

Grid formatting in Microsoft Dynamics CRM Online

One of the most common enhancement requests we receive from our MS Dynamics CRM customers is changing the look and feel of the default UI/UX. A case in point is the default UX of the grid: it is completely unreadable, considering it has no alternate-row styling or, for that matter, even row separators.

Users these days are spoilt for choice, and in such a scenario an unfriendly UI may cause customer dissatisfaction with an otherwise perfect implementation. What is even more appalling is that there is no straightforward way of implementing this simple feature, which has been a de facto standard for more than a decade now.

However, inspired by Bohnnie Maity's blog on conditional formatting for CRM 2013 SP1/2015 grids using actions, I have made changes in the code to make the grid a little more captivating.

This is the standard grid available in Dynamics CRM 2015/2016

Standard grid in MS Dynamics CRM – 2015/2016




After modifying the CSS classes through JavaScript, the grid looks a tad different: more attractive, and easier for analyzing the records in the associated rows.

Grid with gray colored alternative rows


To achieve this output, use JavaScript along the following lines.
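The original snippet was posted as an image; the sketch below captures the idea. The "crmGrid" element id and the colors are illustrative, and touching CRM's internal DOM like this is unsupported, so test it against your version.

```typescript
// Apply alternate-row shading and a row separator to the CRM home-page grid.
function stripeGridRows(): void {
  const grid = parent.document.getElementById("crmGrid"); // internal, unsupported id
  if (!grid) {
    return;
  }
  const rows = grid.getElementsByTagName("tr");
  for (let i = 0; i < rows.length; i++) {
    const row = rows[i] as HTMLElement;
    row.style.borderBottom = "1px solid #d6d6d6"; // row separator
    if (i % 2 === 1) {
      row.style.backgroundColor = "#f2f2f2";      // gray alternate rows
    }
  }
}
```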



Pro-Tip


You can call this JavaScript function by creating a hidden button on every (global) entity form, with the script added as a web resource. For ease of doing this, use the Ribbon Workbench tool with your MS Dynamics CRM solution.




Written by Pramod Dhokane, Microsoft Dynamics Expert at Eternus Solutions


Friday 17 June 2016

MS Dynamics – not just CRM but xRM



Microsoft has really put its weight behind the development of the CRM platform, and its success is pretty evident from the fact that the platform has evolved exceptionally over time. Now that Microsoft's development team has delivered, and done one heck of a job at that, it's time for the marketing team to match that level.



They can simply start by re-branding the product as xRM. Confused? Read on.

The product is still marketed primarily as a CRM tool. Customer Relationship Management (CRM) is an approach for managing a company's interaction with its existing and prospective customers.

When you look at the current features of the platform, CRM is almost a misnomer, since it is capable of so much more!  It is now ready to be used as a fully functional, independent development platform, quite justifiably so.
  • Highly scalable, secure and robust architecture
  • Extremely flexible security model
  • Highly customizable user interface development using web resources
  • Powerful built-in workflow engine for business process automation
  • Ability to expose and consume web services for programmatic manipulation of data and integration with legacy systems
  • Seamless integration with SharePoint, Outlook, MS productivity suite, MS Social Insights, ADX Studios portal
  • Availability of a large number of third-party components
  • Multiple deployment options, including On-Premise, Online, and Partner Hosted

The possibilities this platform opens up are endless. However, in order to completely utilize its power, it would be wise to use it for what it already does best: MANAGE RELATIONS.

If you have been to a B-school, you will know and agree that those countless case studies eventually boil down to ONE thing: managing and nurturing BUSINESS RELATIONS, not just with your customers but with suppliers, employees, stakeholders and the general public. Great businesses are built on leveraging their knowledge of customers, suppliers and stakeholders, and exerting their influence on them. The faster you can analyze their sentiment, the faster you can act on it and gain an edge over your competitors.

Unfortunately, there isn't a complete solution out there that lets you manage relations. Granted, there are namesake modules for this in ERP systems, but they are more transactional in nature and grossly inadequate for the intended purpose.

MS Dynamics CRM can easily be extended into xRM. An xRM should be a generic relationship management solution with a 360-degree approach to managing customers, suppliers, public relations, and just about any relationship the company wishes to manage.


Let’s explore a few use cases where standard CRM functionality can be easily extended to xRM.

 

Public Relations Management

Public Relations Management is extremely crucial in today's era, and United Airlines sure can vouch for it: a PR disaster cost them a whopping 180 million dollars (look up the 'United Breaks Guitars' case). People's satisfaction no longer reaches you directly; they broadcast it on social media.

Social Insights from Microsoft Social Engagement allows businesses to analyze and measure their brand’s perception using social media trends. The tool also provides granularity and allows users to drill down into the data and see what the users have said. By consolidating the details from Dynamics CRM or Dynamics Marketing, you can find out what you’re doing right, and address potential issues before bigger problems arise.

Supply Chain - Forecasting

One problem plaguing the manufacturing sector is not being able to manage its supply chain optimally. Raw materials usually have a long lead time, while orders are placed or modified at a moment's notice. This can translate into either of these scenarios:
  • You over stock raw materials, which is very risky in a volatile price market
  • You lose out on orders because you cannot fulfill them

This problem is compounded further by a lack of the right talent and knowledge: employees responsible for ordering and managing the supply chain are rarely proficient in advanced forecasting models and tools.

One solution to this problem is to utilize the rich pipeline information in CRM, which, combined with the platform's brilliant analytical capabilities, can be used to build automated forecasting models.

Donor Management (NGO / NPO)

It’s intriguing to see how the requirements for NGOs and NPOs can be mapped to standard CRM OOB (Out of the box) entities.
  • ‘Donor profile management’ maps to ‘Customer Profile Management’.
  • ‘Donation management’ maps to Leads and ‘Opportunities Management’.
  • ‘Campaign / fund raising management’ maps to ‘Campaign Management’.
  • If there is additional information you need to capture, it can easily be built using custom entities and web resources.

 

Recruitment Management

You can provide a portal to publish job openings and accept applications using ADX Studios portal. ADX Studios was recently acquired by Microsoft and has seamless integration and access to all CRM entities.

Each application can then be mapped to the standard sales and opportunity processes of verification, multi-stage qualification, multi-stage evaluation and quotation (offers).

These were just a few examples of what can be achieved using MS Dynamics CRM platform. Move over CRM, xRM is here!





Written by Prasad Udupi, Microsoft Dynamics Expert at Eternus Solutions

Wednesday 15 June 2016

Office 365 – A new collaboration Platform for Non-profits..!! Are you ready to move to Cloud?

Do you want to free yourself from the hassles of maintaining an in-house Exchange Server? Is your staff spread across several locations? Do you have budget constraints for IT support like installations and upgrades? Then Microsoft Office 365 Nonprofit may be right for you!



Even today, many non-profit organizations buy various software products to fulfill their requirements. This strategy involves infrastructure costs, application maintenance, data management and a full-fledged IT staff to maintain those products. To address this, Microsoft offers free or low-cost cloud-based office services that non-profit organizations are uniquely qualified for.

It is no longer the case that best-in-class technology solutions are only offered to organizations with huge budgets. Office 365 Nonprofit makes enterprise-level technology affordable for non-profits in over 130 countries, including India.

What is Office 365 for Non-profits?

As part of its "Technology for Good" program, Microsoft donates its cloud-based Office service (Office 365) to all qualifying non-profit organizations.

Office 365 Nonprofit is the core cloud service offering significant benefits to non-profits, including free email, online document editing and storage, video conferencing, instant messaging, and a Yammer site, so you can bring teams together from around the world.

Empower your business to succeed with Office 365 non-profit

Office 365 allows organizations to spend less time on IT maintenance by providing email maintenance, software upgrades, and security. This helps you run your non-profit more efficiently, more securely, and less expensively, so you can focus on accomplishing your mission.

Compelling reasons why an Office 365 subscription is the choice for non-profits:
  • Global Access
    Access on the go from anywhere, anytime and from any device. With Office Web Apps, you can open, view and edit Office documents right from your browser, giving you the flexibility to work wherever and whenever you need to.
  • Brand your organization
    Market your organization with an online website using your own domain name, without needing a designer or paying hosting fees.
  • End User Adoption
    End-user adoption is the key factor in every successful cloud-services decision. With Office 365 Nonprofit, end users get the latest version of the traditional Office suite your staff is already familiar with.
  • Business class email and calendaring
    With Microsoft Exchange Online, you don't have to manage an in-house Exchange server or rely on a third-party email tool. Office 365 provides users with 50GB of mailbox storage, with attachments up to 25MB each.
  • Online conferencing
    Showcase your organization and impress your target audience using HD video conferencing and screen-sharing features. Office 365 offers powerful communication tools like Lync Online with audio and video conferencing, making collaboration easier. You can even invite people or volunteers outside your organization to join.
  • Secure File Sharing
    With Office 365, you get 1 TB of OneDrive for Business space. Share, store and collaborate on information with volunteers or co-workers securely.
  • Best Productivity and Better Collaboration
    Work together, and work smarter. Share documents online with real-time collaboration and co-authoring: you can edit documents simultaneously with co-workers, improving productivity and saving precious time.

Non-Profit success with Office 365

  • Raise more and bigger donations
  • Connect staff, volunteers and listeners 24/7
  • Encourage creativity and collaboration
  • Transition seamlessly between devices and expand beyond geographic boundaries
  • Double their outside funding
  • Collaborate on important documents and reports
  • Create a more efficient workflow
  • Create work for more volunteers and increase their productivity
  • Communicate effectively with stakeholders from anywhere

What do we offer to non-profits?

Choosing the right cloud solution increases your organization's efficiency, saves on technology costs, and fosters your best collaboration
    - Microsoft, on the Office 365 Donation initiative


Eternus Solutions offers its vast experience with Office 365 for non-profits, providing the professional assistance you need through adoption and implementation to maximize the value you obtain. We can help your organization evaluate the costs and benefits of Office 365 to determine whether it is the best solution for your business. We believe cloud-based solutions are the most cost-effective and sustainable option for non-profits.

We will work with you to ensure you benefit from the best discounts you're entitled to.

Our services for Non-Profit include:
  1. Assessment and Planning
  2. Office 365 Implementation & Migration
  3. CRM for non-profits

For more information on Office 365 Nonprofit, visit the Microsoft Office 365 Nonprofit site and sign up for the free trial to get started.




Written by Aradhana Chindhade, Microsoft Technologies Expert at Eternus Solutions

Monday 6 June 2016

Angular 2.0: A revolutionary angle!


Hello there! We are drawing close to the final release of Angular 2.0, which means it's time to decipher what the excitement means for the developer community and understand the business impact of this new framework, backed by none other than Google Inc.

Angular 2.0: A brief history


At the ng-Europe conference on 22nd September 2014, the Angular team announced its next major release, a.k.a. Angular 2.0, disclosing that there would be drastic changes in the codebase and semantics. To everyone's surprise, they also announced that Directives, Controllers, $scope and jqLite would be dropped, clearly indicating a break in compatibility with the existing codebase.

Understandably, this created a huge uproar amongst developers, with the absence of a migration path from 1.x to 2.0 causing a lot of confusion.

With the growing popularity of other frameworks like ReactJS and Ember in late 2014, the Angular Team decided to restructure their core architecture in order to meet the market requirements and build the best ever JavaScript Framework while ignoring the animosity from the community.

On 30th April 2015, Angular 2.0 moved from alpha to developer preview, and the new code structure caused a big hullabaloo in the market. As promised, many features were drastically changed and compatibility with previous versions was ruled out entirely. This resulted in some disappointment; however, most developers were excited to ride the new wave in front-end technology that was about to begin. In December 2015, Angular 2.0 moved to the beta phase, pushing it closer to its final release. Once again, Angular became the talk of the town for front-end developers!

Why should we not ignore this wave?

  • AngularJS is powered by Google Inc. (need I say anything more?), which has further increased its community size, making it one of the most widely used front-end development frameworks.
  • A bigger community results in more plugins and support, which will grow the framework further. Very soon it is expected to become "The Framework" that cannot be ignored when talking about JavaScript-based UI development.
  • Although Google declared at ng-conf 2015 that it will continue supporting the 1.x version as long as the community keeps using it, in order to push its new baby into the market Google might decide to revoke that support after a stable release of 2.0. This might mean no new features are released in the existing codebase.


What benefits can Angular 2.0 provide to my Application?

  • With the restructuring of $watch, the overall performance of Angular 2.0 has drastically improved compared with Angular 1.x. The team that built Angular 1.x is the same team working on Angular 2.0, and it has opened its weekly status meetings to the community, carrying the extremely valuable experience of the previous version along with the community's constructive criticism, ensuring this framework becomes far better than its predecessor.
  • The binding technique in Angular 1.x was based on ng-model. Instant change detection on plain JavaScript objects (POJOs) was one of the main features that made the framework popular amongst the developer community. Let's see how it works.
    • Angular creates patch points for all the asynchronous interactions with the form on page load.
    • On these patch points, Angular runs dirty checking on the scope object to see if any of the associated variables have changed. If so, it triggers the corresponding watchers.
    • These watchers in turn synchronize the UI and the model value through multiple dirty-check and run-watcher cycles.
    • Problem with this approach:
      • It is not clear which watcher will be fired in which sequence. Additionally, it is difficult to predict the number of times these watchers will be called.
      • The digest cycle consumes a considerably large amount of resources, so although the technique just about works, there is great scope for improvement.
      • It is extremely difficult to control the sequence of model update cycles. Registering a dependent listener is very risky.
    • How Angular 2.x solves this problem.
      • Angular 2.x has adopted a zones mechanism, comparable to a thread-local model in multi-threaded languages. This makes data updates more transparent and avoids the need for a conventional digest cycle.
      • This blog gives more clarity on how Angular 2.x change detection system improves the performance over conventional digest cycle approach.
  • Many new integrated features, including flexible routing, have reduced the dependency on third-party libraries. Angular 2.0 has also dropped the less-used modules, reflecting the real needs of common UI development. These changes make the library more lightweight and less error-prone. A sketch of how Angular 2.0 has simplified routing appears after this list.

  • Development of Angular 2.0 is focused on mobile platforms. The thinking behind this is that it is easier to handle the desktop side of things once the challenges related to mobile (performance, load time, etc.) have been addressed. The 1.x version was not designed with this approach. Although the Ionic framework enabled the use of Angular 1.x on mobile devices, performance-wise it was still very slow.
  • Angular 2.0 is based on TypeScript, which provides compile-time error checking. This helps developers catch errors well in advance, reducing bug-fixing costs and making development considerably faster. The Angular team is also planning to release the framework in variants that support Dart and plain ECMAScript 6 as well.
  • Annotation support and improved dependency injection (DI) reduce the number of lines of code and improve maintainability.
  • Simplified syntax has reduced the learning curve of Angular 2.0 to a great extent, and a good online tutorial can help you put this weapon in your quiver, rapidly ramping up a team to get started. The Angular 2.0 snippet referenced below appears quite simple to any beginner-level developer when compared with the Angular 1.x snippet.

    Angular 2.0 code snippet

    Angular 1.X code snippet

  • Lazy loading was one of the biggest challenges in Angular 1.x, although third-party libraries like ocLazyLoad and requireJS enabled the feature. Ideally, the framework itself should provide lazy loading in order to maintain a better hierarchy of modules and their corresponding files. Considering this limitation in 1.x, the Angular team has addressed the issue in 2.x with considerably better lazy-loading techniques. This blog gives a better idea of how Angular 2.0 has approached the issue using asynchronous routing.
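Since the original snippet images are not reproduced here, this is a hedged sketch of beta-era Angular2 routing (component names are illustrative, and the router API changed more than once before the final release):

```typescript
import { Component } from "angular2/core";
import { RouteConfig, ROUTER_DIRECTIVES } from "angular2/router";

@Component({ selector: "home", template: "<h2>Home</h2>" })
class HomeComponent {}

@Component({ selector: "orders", template: "<h2>Orders</h2>" })
class OrdersComponent {}

// Routes are declared once on the root component; <router-outlet> renders the match.
@Component({
  selector: "my-app",
  template: "<router-outlet></router-outlet>",
  directives: [ROUTER_DIRECTIVES],
})
@RouteConfig([
  { path: "/home", name: "Home", component: HomeComponent, useAsDefault: true },
  { path: "/orders", name: "Orders", component: OrdersComponent },
])
export class AppComponent {}
```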

Is there any bitter bite as well?

No technology is perfect enough to meet all of a community's expectations, and Angular 2.0 is no exception. Let's highlight some of the challenges associated with it:
  • A huge number of applications across the globe have been built on the Angular 1.x framework. As no direct migration path has been provided by the Angular team, migrating these existing projects to Angular 2.0 is going to be a real challenge for the community.
  • Although thousands of blogs and tutorials are already online, real enterprise-product experience with this framework is yet to emerge.
  • The release date of the framework is still not fixed, resulting in an uncertain wait.


The Last Word…

There is still a lot to be analyzed, as the Angular 2.0 framework comes with a lot of promises for the UI developer community; as of the beta release, it has already fulfilled many of them. It is clear that Angular 2.0 is going to revolutionize UI technology to a great extent and begin a new era of UI development.

Happy Coding…!!




Written by Mahesh Kedari, AngularJS Expert at Eternus Solutions